151

Bayesian Networks to Support Decision-Making for Immune-Checkpoint Blockade in Recurrent/Metastatic (R/M) Head and Neck Squamous Cell Carcinoma (HNSCC)

Huehn, Marius, Gaebel, Jan, Oeser, Alexander, Dietz, Andreas, Neumuth, Thomas, Wichmann, Gunnar, Stoehr, Matthaeus 02 May 2023 (has links)
New diagnostic methods and novel therapeutic agents spawn additional and heterogeneous information, leading to an increasingly complex decision-making process for optimal cancer treatment. A great amount of information is collected in organ-specific multidisciplinary tumor boards (MDTBs). By considering the patient’s tumor properties, molecular pathological test results, and comorbidities, the MDTB has to agree on an evidence-based treatment decision. Immunotherapies are increasingly important in today’s cancer treatment, resulting in detailed information that influences the decision-making process. Clinical decision support systems can facilitate a better understanding by processing multiple datasets of oncological cases and molecular genetic information, potentially fostering transparency and comprehensibility of the available information and eventually leading to an optimal treatment decision for the individual patient. We constructed a digital patient model based on Bayesian networks that combines the relevant patient-specific and molecular data with conditional probabilities derived from pertinent studies and clinical guidelines to calculate treatment decisions in head and neck squamous cell carcinoma (HNSCC). In a validation analysis, the model can provide guidance within the growing subject of immunotherapy in HNSCC and, based on its ability to calculate reliable probabilities, facilitates estimation of suitable therapy options. We compared the actual treatment decisions of 25 patients with the recommendations calculated by our model and found significant concordance (Cohen’s κ = 0.505, p = 0.009) and 84% accuracy.
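A minimal sketch of the two computational ingredients this abstract mentions: querying a small Bayesian network for a treatment recommendation, and scoring model-board agreement with Cohen's κ. All variable names, probabilities, and decision data below are invented for illustration; they are not the thesis's model.

```python
# Toy Bayesian network for a treatment decision, evaluated by enumeration.
# Variables and CPTs are hypothetical, not those of the published HNSCC model.
from itertools import product

p_pdl1 = {True: 0.6, False: 0.4}      # prior P(PD-L1 expression high)
p_comorb = {True: 0.3, False: 0.7}    # prior P(relevant comorbidity)
p_icb = {                             # P(recommend checkpoint blockade | PD-L1, comorbidity)
    (True, False): 0.85, (True, True): 0.55,
    (False, False): 0.35, (False, True): 0.10,
}

def posterior_icb(evidence):
    """P(recommend ICB | evidence) by brute-force enumeration."""
    num = den = 0.0
    for pdl1, comorb in product([True, False], repeat=2):
        if "pdl1" in evidence and evidence["pdl1"] != pdl1:
            continue
        if "comorb" in evidence and evidence["comorb"] != comorb:
            continue
        prior = p_pdl1[pdl1] * p_comorb[comorb]
        num += prior * p_icb[(pdl1, comorb)]
        den += prior
    return num / den

def cohen_kappa(a, b):
    """Agreement corrected for chance, as used to validate the model."""
    n = len(a)
    po = sum(x == y for x, y in zip(a, b)) / n
    labels = set(a) | set(b)
    pe = sum((a.count(l) / n) * (b.count(l) / n) for l in labels)
    return (po - pe) / (1 - pe)

print(posterior_icb({"pdl1": True}))                  # recommendation given PD-L1 high
model = ["ICB", "chemo", "ICB", "ICB", "chemo"]       # invented model outputs
board = ["ICB", "chemo", "chemo", "ICB", "chemo"]     # invented MDTB decisions
print(cohen_kappa(model, board))
```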
152

MCMC estimation of causal VAE architectures with applications to Spotify user behavior

Harting, Alice January 2023 (has links)
A common task in data science at internet companies is to develop metrics that capture aspects of the user experience. In this thesis, we are interested in systems of measurement variables without direct causal relations such that covariance is explained by unobserved latent common causes. A framework for modeling the data generating process is given by Neuro-Causal Factor Analysis (NCFA). The graphical model consists of a directed graph with edges pointing from the latent common causes to the measurement variables; its functional relations are approximated with a constrained Variational Auto-Encoder (VAE). We refine the estimation of the graphical model by developing an MCMC algorithm over Bayesian networks from which we read marginal independence relations between the measurement variables. Unlike standard independence testing, the method is guaranteed to yield an identifiable graphical model. Our algorithm is competitive with the benchmark, and it admits additional flexibility via hyperparameters that are natural to the approach. Tuning these parameters yields superior performance over the benchmark. We train the improved NCFA model on Spotify user behavior data. It is competitive with the standard VAE on data reconstruction with the benefit of causal interpretability and model identifiability. We use the learned latent space representation to characterize clusters of Spotify users. Additionally, we train an NCFA model on data from a randomized control trial and observe treatment effects in the latent space.
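For readers unfamiliar with structure MCMC, the following is a hedged sketch of a Metropolis-Hastings walk over DAGs, scored by BIC on toy binary data. The move set (single-edge toggles), the score, and the data are simplifications chosen for brevity; the thesis's actual sampler, priors, and NCFA integration are not reproduced here.

```python
# Metropolis-Hastings over DAG structures, scored by decomposable BIC on
# toy binary data. Single-edge toggles only (no reversals); cyclic
# proposals are rejected outright. Illustrative, not the thesis's sampler.
import numpy as np
import networkx as nx
from itertools import product

rng = np.random.default_rng(0)
data = rng.integers(0, 2, size=(500, 4))      # toy binary data, 4 variables
n, d = data.shape

def bic(adj):
    """BIC of a DAG: per-node log-likelihood of MLE CPTs minus a penalty."""
    score = 0.0
    for j in range(d):
        parents = np.where(adj[:, j] == 1)[0]
        for pa_vals in product([0, 1], repeat=len(parents)):
            mask = (np.ones(n, dtype=bool) if len(parents) == 0
                    else np.all(data[:, parents] == pa_vals, axis=1))
            m = int(mask.sum())
            if m == 0:
                continue
            m1 = int(data[mask, j].sum())
            for c in (m1, m - m1):
                if c > 0:
                    score += c * np.log(c / m)
        score -= 0.5 * np.log(n) * 2 ** len(parents)  # one free param per parent row
    return score

adj = np.zeros((d, d), dtype=int)
current = bic(adj)
for _ in range(2000):
    cand = adj.copy()
    i, j = rng.choice(d, size=2, replace=False)
    cand[i, j] ^= 1                               # toggle edge i -> j
    if not nx.is_directed_acyclic_graph(
            nx.from_numpy_array(cand, create_using=nx.DiGraph)):
        continue
    s = bic(cand)
    if np.log(rng.random()) < s - current:        # MH acceptance on log scale
        adj, current = cand, s
print("sampled adjacency matrix:\n", adj)
```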
153

Investigation of audio feature extraction and audiovisual correspondences for bio-inspired auditory to visual substitution systems

Adeli, Mohammad January 2016 (has links)
Sensory substitution systems encode a stimulus modality into another stimulus modality. They can provide the means for handicapped people to perceive stimuli of an impaired modality through another modality. The purpose of this study was to investigate auditory to visual substitution systems. This type of sensory substitution is not well studied, probably because of the complexity of the auditory system and the difficulties arising from the mismatch between audible sounds, which can change with frequencies up to 20000 Hz, and visual stimuli, which must change very slowly with time to be perceived. Two specific problems of auditory to visual substitution systems were targeted in this research: the investigation of audiovisual correspondences and the extraction of auditory features. An audiovisual experiment was conducted online to find the associations between auditory features (pitch and timbre) and visual features (shape, color, height). One hundred and nineteen subjects took part in the experiments. A strong association between the timbre of envelope-normalized sounds and visual shapes was observed. Subjects strongly associated soft timbres with blue, green or light gray rounded shapes; harsh timbres with red, yellow or dark gray sharp angular shapes; and timbres having elements of softness and harshness together with a mixture of the previous two shapes. Fundamental frequency was not associated with height, grayscale or color. Given the correspondence between timbre and shapes, in the next step a flexible and multipurpose bio-inspired hierarchical model for analyzing timbre and extracting the important timbral features was developed. Inspired by findings in the fields of neuroscience, computational neuroscience, and psychoacoustics, not only does the model extract spectral and temporal characteristics of a signal, but it also analyzes amplitude modulations on different timescales. It uses a cochlear filter bank to resolve the spectral components of a sound, lateral inhibition to enhance spectral resolution, and a modulation filter bank to extract the global temporal envelope and roughness of the sound from amplitude modulations. To demonstrate its potential for timbre representation, the model was successfully evaluated in three applications: 1) comparison with subjective values of roughness, 2) musical instrument classification, and 3) feature selection for sounds grouped according to the visual shape assigned to them in the audiovisual experiment. The correspondence between timbre and shapes revealed by this study and the proposed model for timbre analysis can be used to develop intuitive auditory to visual substitution systems that encode timbre into visual shapes.
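As an illustration of the modulation-filter-bank idea, here is a rough sketch that estimates a roughness index as the energy of 30-150 Hz amplitude modulations in band-limited channels. The filter shapes, channel spacing, and modulation range are assumptions chosen for brevity, not the thesis's cochlear model.

```python
# Roughness sketch: energy of 30-150 Hz amplitude modulations in band-
# limited channels (crude "cochlear" bank + one modulation band). A 70 Hz
# AM tone, a classically rough sound, should score higher than a pure tone.
import numpy as np
from scipy.signal import butter, sosfilt, hilbert

fs = 16000
t = np.arange(0, 1.0, 1 / fs)

def band_sos(lo, hi):
    return butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")

def roughness(x, centers=(250, 500, 1000, 2000, 4000)):
    total = 0.0
    for fc in centers:
        band = sosfilt(band_sos(fc / 1.3, fc * 1.3), x)  # "cochlear" channel
        env = np.abs(hilbert(band))                      # amplitude envelope
        mod = sosfilt(band_sos(30, 150), env)            # roughness-range modulations
        total += float(np.mean(mod ** 2))
    return total

carrier = np.sin(2 * np.pi * 1000 * t)
rough = (1 + np.sin(2 * np.pi * 70 * t)) * carrier       # 70 Hz AM tone
print(f"pure tone: {roughness(carrier):.5f}  AM tone: {roughness(rough):.5f}")
```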
154

Analyzing DBpedia Linked Open Data (LOD) on Spark: Movie Box Office Prediction as an Example

劉文友, Liu, Wen Yu Unknown Date (has links)
In recent years, Linked Open Data (LOD) has been recognized as containing a large amount of potential value. How to collect and integrate diverse LOD content and make it available to analysts for extraction and analysis has become an important research challenge. LOD is represented in the Resource Description Framework (RDF) format, which can be queried with the SPARQL language. However, large collections of RDF data still lack a high-performance, scalable, integrated system for storage, querying, and analysis, and research on analytics pipelines for big RDF data remains incomplete. Taking movie box office prediction as an example, this study uses the DBpedia LOD dataset linked with an external movie database (IMDb) and performs large-scale graph analytics on the Apache Spark platform. We first build box office prediction models using two algorithms, Naïve Bayes and Bayesian networks, and use the Bayesian Information Criterion (BIC) to select the best Bayesian network structure. We then compute multiclass ROC curves and AUC values to evaluate the predictive accuracy of the models.
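A hedged sketch of the evaluation step described above: macro-averaged one-vs-rest ROC/AUC for a multiclass box-office classifier. Features, labels, and the Gaussian Naïve Bayes stand-in are toy assumptions, not the DBpedia/IMDb-on-Spark pipeline of the thesis; with random data the AUC will hover around 0.5.

```python
# Multiclass ROC/AUC evaluation of a Naive Bayes box-office classifier.
# All data is random stand-in data, so the printed AUC will be near chance.
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
X = rng.normal(size=(600, 5))        # e.g. budget, cast, genre features
y = rng.integers(0, 3, size=600)     # low / medium / high box office class
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)

clf = GaussianNB().fit(Xtr, ytr)
probs = clf.predict_proba(Xte)
# macro-averaged one-vs-rest AUC, a common multiclass ROC summary
print(roc_auc_score(yte, probs, multi_class="ovr", average="macro"))
```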
155

Development of a methodology to assist the commissioning of low energy buildings

Hannachi-Belkadi, Nazila Kahina 08 July 2008 (has links)
Low-energy buildings have attracted great interest in recent years, given the important role they play in reducing greenhouse gas emissions on the one hand and the surge in fuel prices on the other. Nevertheless, in many cases low-energy buildings do not achieve the expected performance. This problem is largely due to: 1) the loss of information throughout the building life cycle, and 2) the lack of regular evaluation of the decisions made by the actors involved. Commissioning, as a quality-control process, plays an important role in the successful delivery of low-energy buildings. This research aims to develop a methodology to assist the persons responsible for commissioning in defining commissioning plans adapted to their projects. We first established a state of the art of low-energy building delivery, which we then confronted with reality through a survey of building actors and studies of real cases. This step allowed us to formulate a hypothesis concerning the need for a "progressive" commissioning process, adapted to the particularities of each project, and to describe a global methodology for assisting the delivery of low-energy buildings that integrates decision support, information management, and a "progressive" commissioning process that verifies the proper execution of the first two. To put this methodology into practice, a toolbox was developed. It comprises: 1) a "static" tool that defines an initial generic commissioning plan matching the characteristics of a project, drawing on an exhaustive database of commissioning tasks, and 2) a "dynamic" tool, based on probabilities, that updates the initial (generic) commissioning plan to fit the project at hand. This update takes into account the particularities and unexpected events encountered during the project, making the commissioning plan more precise. The first tool was tested in a real case and the second was applied in several settings to show their possibilities and limits. The results highlight two important points: 1) the value of a structured, progressive commissioning plan for verifying the quality of low-energy building delivery and thus ensuring the expected performance is achieved, and 2) the value of a probabilistic tool such as Bayesian networks for anticipating drifts and handling the unexpected events encountered during this living process. This methodology provides a basis for developing tools that assist in defining "progressive" commissioning plans for new and existing buildings across all building sectors.
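The "dynamic" tool's core idea, updating the probability that a commissioning task is relevant once site observations arrive, can be sketched with a plain Bayes'-rule update. Task names, priors, and likelihoods below are invented for the sketch; the thesis's tool works over a full Bayesian network rather than independent updates.

```python
# Bayes'-rule update of each task's relevance probability after an
# on-site observation. All task names, priors, and likelihoods are invented.
priors = {
    "verify insulation continuity": 0.40,
    "re-check HVAC balancing":      0.25,
    "recalibrate CO2 sensors":      0.15,
}
# (P(observation | task relevant), P(observation | task not relevant))
# for the observation "overheating reported in winter"
likelihoods = {
    "verify insulation continuity": (0.70, 0.20),
    "re-check HVAC balancing":      (0.80, 0.30),
    "recalibrate CO2 sensors":      (0.30, 0.25),
}

for task, prior in priors.items():
    l1, l0 = likelihoods[task]
    post = l1 * prior / (l1 * prior + l0 * (1 - prior))  # Bayes' rule
    print(f"{task}: P(relevant) {prior:.2f} -> {post:.2f}")
```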
156

Bayesian model for bank failure risk in Taiwan

黃薰儀, Huang, Hsun Yi Unknown Date (has links)
International organizations have defined and predicted banking crises at the country level, but Taiwan was not included, even though such crises have occurred there over the past twenty years. We construct an early-warning system for banking crises in Taiwan and develop a model suited to the specifics of our country. Exploiting two properties of the Bayesian network model, (1) posterior values and (2) probabilities, we build a systemic model from microeconomic data. Researchers can thus assess the financial condition of individual banks and issue warnings for them. Posterior values allow many financial ratios to be considered simultaneously, while the probabilistic output helps gauge the severity of a crisis and extends the model to the macroeconomic level. We develop two methods to build the systemic model. One is the percentage method, based on the share of distressed banks among all banks. The other is the weighted-average method, which assigns larger weights to banks with high distress probabilities and smaller weights to financially sound banks. Comparing our results with the report issued by the Taiwan Financial Services Roundtable in 2009, both methods track the occurrence of crises, but once the distress signals we specify are taken into account, the weighted-average method clearly has better predictive power than the percentage method. In addition, our model predicts crises arising from bank-specific factors noticeably better than crises driven by macroeconomic shocks.
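A sketch of the two aggregation rules described above, with made-up per-bank distress probabilities from the micro-level model. The thesis's exact weighting scheme is not specified here; weighting each bank's probability by itself is one plausible reading of "larger weights for high-probability banks" and is labeled as an assumption.

```python
# Two systemic indices from per-bank distress probabilities (invented data).
probs = [0.05, 0.10, 0.65, 0.80, 0.15]   # P(distress) for each bank
threshold = 0.5

# Method 1 -- percentage method: share of banks flagged as distressed
pct = sum(p > threshold for p in probs) / len(probs)

# Method 2 -- weighted average (assumed scheme): each bank's probability
# weighted by itself, so high-risk banks dominate the systemic index
wavg = sum(p * p for p in probs) / sum(probs)

print(f"percentage method: {pct:.2f}, weighted average: {wavg:.2f}")
```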
157

Bayesian belief networks for dementia diagnosis and other applications : a comparison of hand-crafting and construction using a novel data driven technique

Oteniya, Lloyd January 2008 (has links)
The Bayesian network (BN) formalism is a powerful representation for encoding domains characterised by uncertainty. However, before it can be used it must first be constructed, which is a major challenge for any real-life problem. There are two broad approaches, namely the hand-crafted approach, which relies on a human expert, and the data-driven approach, which relies on data. The former approach is useful; however, issues such as human bias can introduce errors into the model. We have conducted a literature review of the expert-driven approach, cherry-picked a number of common methods, and engineered a framework to assist non-BN experts with expert-driven construction of BNs. The latter approach uses algorithms to construct the model from a data set. However, construction from data is provably NP-hard. To solve this problem, approximate, heuristic algorithms have been proposed; in particular, algorithms that assume an order between the nodes, thereby reducing the search space. Traditionally, this approach relies on an expert providing the order among the variables, but an expert may not always be available, or may be unable to provide the order. Nevertheless, if a good order is available, these order-based algorithms have demonstrated good performance. More recent approaches attempt to 'learn' a good order and then use an order-based algorithm to discover the structure. To eliminate the need for order information during construction, we propose a search in the entire space of Bayesian network structures; we present a novel approach for carrying out this task, and we demonstrate its performance against existing algorithms that search in the entire space and in the space of orders. Finally, we employ the hand-crafting framework to construct models for the task of diagnosis in a real-life medical domain, dementia diagnosis. We collect real dementia data from clinical practice, and we apply the data-driven algorithms developed here to assess the concordance between the reference models developed by hand and the models derived from real clinical data.
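To make the order-based family concrete, here is a compact K2-style sketch: given an assumed node order, each node greedily acquires parents from its predecessors while a local BIC score improves. This illustrates the baseline the thesis improves upon, not its novel order-free search; the data and order are invented.

```python
# K2-style greedy parent selection under a fixed node order, scored by a
# local BIC on toy binary data. The order is assumed given, which is the
# very requirement the thesis's order-free search removes.
import numpy as np
from itertools import product

rng = np.random.default_rng(2)
data = rng.integers(0, 2, size=(400, 4))
n, d = data.shape

def local_bic(j, parents):
    """BIC contribution of node j with the given parent list (binary data)."""
    score = 0.0
    for pa_vals in product([0, 1], repeat=len(parents)):
        mask = (np.ones(n, dtype=bool) if not parents
                else np.all(data[:, parents] == pa_vals, axis=1))
        m = int(mask.sum())
        if m == 0:
            continue
        m1 = int(data[mask, j].sum())
        for c in (m1, m - m1):
            if c > 0:
                score += c * np.log(c / m)
    return score - 0.5 * np.log(n) * 2 ** len(parents)

order = [0, 1, 2, 3]                      # the assumed variable ordering
parents = {j: [] for j in order}
for pos, j in enumerate(order):
    improved = True
    while improved:
        improved = False
        base = local_bic(j, parents[j])
        for cand in order[:pos]:          # only predecessors may be parents
            if cand in parents[j]:
                continue
            if local_bic(j, parents[j] + [cand]) > base:
                parents[j].append(cand)
                improved = True
                break
print(parents)
```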
158

A Bayesian Network methodology for railway risk, safety and decision support

Mahboob, Qamar 24 March 2014 (has links) (PDF)
For railways, risk analysis is carried out to identify hazardous situations and their consequences. Until recently, classical methods such as Fault Tree Analysis (FTA) and Event Tree Analysis (ETA) were applied to model the linear and logically deterministic aspects of railway risks, safety and reliability. However, modern railway systems are complex, involving multiple dependencies between system variables and uncertainties about these dependencies. In train derailment accidents, for instance, high train speed is a common cause of failure; slip and failure of brake applications are disjoint events; failure dependency exists between the train protection and warning system and driver errors; driver errors are time dependent; and there is functional uncertainty in derailment conditions. Failing to incorporate these aspects of a complex system leads to wrong estimations of risk and safety and, consequently, to wrong management decisions. Furthermore, a complex railway system integrates various technologies and is operated in an environment where the behaviour and failure modes of the system are difficult to model using probabilistic techniques. Modelling and quantifying railway risk and safety problems that involve dependencies and uncertainties such as those mentioned above are complex tasks. Importance measures are useful for ranking components that are significant with respect to the risk, safety and reliability of a railway system, but their computation using FTA has limitations for complex railways. ALARP (As Low As Reasonably Practicable) risk acceptance criteria are widely accepted as 'best practice' in the railways. According to the ALARP approach, a tolerable region exists between the regions of intolerable and negligible risk. In the tolerable region, risk is undertaken only if a benefit is desired. In this case, additional criteria are needed to identify the socio-economic benefits of adopting a safety measure for railway facilities. The Life Quality Index (LQI) is a rational way of establishing a relation between the financial resources utilized to improve the safety of an engineering system and the potential fatalities that can be avoided by the safety improvement. This thesis shows the application of the LQI approach to quantifying the social benefits of a number of safety management plans for a railway facility. We apply Bayesian networks and influence diagrams, which are extensions of Bayesian networks, to model and assess the life safety risks associated with railways. Bayesian networks are directed acyclic probabilistic graphical models that handle the joint distribution of random variables in a compact and flexible way. In influence diagrams, problems of probabilistic inference and decision making based on utility functions can be combined and optimized, especially for systems with many dependencies and uncertainties, and the optimal decision, which maximizes the total benefit to society, is obtained. In this thesis, the application of Bayesian networks to the railway industry is investigated for the purpose of improving the modelling and analysis of risk, safety and reliability in railways. One example application and two real-world applications are presented to show the usefulness and suitability of Bayesian networks for quantitative risk assessment and risk-based decision support in railways.
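One commonly cited form of the LQI is L = g^q · e, with g the GDP per person, e the life expectancy, and q the ratio of working time to leisure time; a safety plan costing dg per person and gaining de of life expectancy is then acceptable if L does not decrease. The sketch below applies this acceptance test to invented numbers; the thesis's exact LQI formulation and calibration may differ.

```python
# LQI acceptance test under the assumed form L = g**q * e. All numbers
# (g, e, q, plan costs and life-expectancy gains) are invented.
g, e, q = 35_000.0, 80.0, 0.2   # GDP/person, life expectancy, work/leisure ratio

def lqi(g_val, e_val):
    return g_val ** q * e_val

plans = {                        # (cost per person, life-expectancy gain in years)
    "new signalling system":  (40.0, 0.050),
    "platform safety doors":  (15.0, 0.002),
    "extra track inspection": (5.0,  0.0001),
}
base = lqi(g, e)
for name, (cost, gain) in plans.items():
    ok = lqi(g - cost, e + gain) >= base   # accept only if LQI does not fall
    print(f"{name}: {'acceptable' if ok else 'rejected'} under the LQI test")
```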
159

Continuous Reconceptualization of Personalized Photograph Tagging System Based on Contextuality and Intention

李俊輝 Unknown Date (has links)
In the digital era, labeling and retrieval are the means by which people make sense of a huge volume of lifetime archives. Focusing on tedious, periodic general events in personal memory, this study addresses two issues: (1) the semantization of episodic memory, and (2) the trade-off between common and personal knowledge. Building on the automatic image-tagging function of a lifelong digital archiving platform, we propose the Continuous Reconceptualization Model, which uses temporal-spatial information as an index and models the cognitive process of exemplar categorization when people tag each photograph. By integrating common knowledge, the user's current life situation, and personal preference, the model improves the annotation of general events. In the experiment, we implemented a common-knowledge model, a personal-knowledge model, and our Continuous Reconceptualization Model, and recruited nine participants to label photographs so we could analyze cognitive fit and tagging efficiency. The results show that the Continuous Reconceptualization Model overcomes the limitations of the other two models, including auto-tagging problems with revisited places, seasonal activities, non-continuous activities, and information boundaries. This study thus demonstrates automatic semantic labeling of general events in personal lifetime memory.
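The fusion of the three factors can be sketched as a weighted log-probability score per candidate tag. The weights, probability tables, and tag names below are invented; the thesis's model is a richer cognitive model, not this toy mixture.

```python
# Weighted log-probability fusion of three knowledge sources per candidate
# tag. Weights, tables, and tags are invented for the sketch.
import math

candidates = ["beach", "office", "ski trip"]
p_common   = {"beach": 0.50, "office": 0.10, "ski trip": 0.05}  # crowd prior
p_context  = {"beach": 0.20, "office": 0.05, "ski trip": 0.70}  # place/time cues
p_personal = {"beach": 0.10, "office": 0.30, "ski trip": 0.60}  # user history

w_common, w_context, w_personal = 0.3, 0.4, 0.3  # common/personal trade-off

def score(tag):
    return (w_common * math.log(p_common[tag])
            + w_context * math.log(p_context[tag])
            + w_personal * math.log(p_personal[tag]))

best = max(candidates, key=score)
print(best)  # "ski trip": context and history outweigh the crowd prior
```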
160

Gestion dynamique des connaissances de maintenance pour des environnements de production de haute technologie à fort mix produit / Dynamic management of maintenance knowledge for high technology production environments with high product mix

Ben Said, Anis 18 May 2016 (has links)
Constant progress in electronic technology, the short commercial life of products, and the increasing diversity of customer demand make the semiconductor industry a production environment constrained by continuous change in product mix and technologies. In such an environment, success depends on the ability to develop and industrialize new products within competitive time frames while keeping cost, yield, and cycle-time criteria at a good level. These criteria require high, sustainable availability of production capacity, which in turn needs appropriate maintenance policies in terms of diagnosis, supervision, planning, and operating protocols. At the start of this study, only the FMEA approach (Failure Mode, Effects and Criticality Analysis) was used to capitalize expert knowledge for managing maintenance policies. However, the evolving nature of the industrial context requires updating this knowledge at appropriate frequencies in order to adapt operational procedures to changes in equipment and process behavior. This thesis aims to show that knowledge updating can be organized through an operational methodology combining Bayesian networks and the FMEA method. In this approach, existing knowledge and know-how are first capitalized as cause-effect links using the FMEA method in order to prioritize maintenance actions and prevent their consequences for the equipment, product quality, and personnel safety. This knowledge and expertise are then used to develop standardized operating procedures for sharing expert knowledge and know-how. The causal links stored in the FMEA are modeled in an operational Bayesian network (O-BN) to enable assessment of the effectiveness of maintenance actions and, hence, the relevance of the existing capitalized knowledge. In an uncertain and highly variable environment, the proper execution of procedures is measured using standard maintenance performance measurement (MPM) indicators, and the accuracy of existing knowledge is assessed through the accuracy of the O-BN model. Any drift in these criteria triggers the learning of a new unsupervised Bayesian network (U-BN) to discover new causal relations from historical data. The structural difference between the O-BN (built from expert judgment) and the U-BN (learned from data) highlights potential new knowledge that is analyzed and validated by experts to modify the existing FMEA and update the associated maintenance procedures. The proposed methodology was tested in a production workshop constrained by a high product mix to demonstrate its ability to dynamically renew expert knowledge and improve the effectiveness of maintenance actions. This experiment led to a 30% decrease in failure occurrences due to inappropriate maintenance actions, attesting to the better quality of the knowledge modeled in the tools provided by this thesis.
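At its simplest, the "structural difference between O-BN and U-BN" step reduces to comparing edge sets, as in this sketch. The edges are illustrative stand-ins for FMEA cause-effect links, not the thesis's networks.

```python
# Candidate new knowledge = edge differences between the expert-built
# network (O-BN) and the one learned from data (U-BN). Edges are invented.
o_bn = {("pump wear", "vibration"), ("vibration", "tool breakage"),
        ("operator error", "scrap")}
u_bn = {("pump wear", "vibration"), ("coolant temp", "tool breakage"),
        ("operator error", "scrap")}

new_candidates = u_bn - o_bn  # learned from data, absent from the FMEA
obsolete       = o_bn - u_bn  # in the FMEA, unsupported by recent data
print("to validate with experts:", new_candidates)
print("to re-examine:", obsolete)
```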
