  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
61

合作式閱讀標註之知識萃取機制研究 / A study on developing knowledge extraction mechanisms from cooperative reading annotation

陳勇汀, Chen, YungTing Unknown Date (has links)
Based on the concept of cooperative reading and learning, this study developed a cooperative reading annotation system, the "Knowledge-based Annotation Learning System (KALS)", which supports multiple readers simultaneously annotating and discussing a common text-based digital material in HTML format, in order to deepen and broaden reading. Through KALS, readers can freely annotate any span of text and can share and discuss their annotations with other readers via the system's interaction interface.

Building on KALS, the study further used expert evaluation to design an intelligent "Knowledge Extraction Mechanism (KEM)" that judges the importance of each reader's annotation, so that high-quality annotation knowledge and annotation skills mined from the large volume of archived annotations can be recommended online to readers. KEM is grounded in the reading-comprehension strategies and skills embodied in readers' annotations and in the consensus that arises within a cooperative reading community. It considers six factors: anchor length, part of speech of the anchor word, anchor location, annotation strategy, anchor consensus, and favorite consensus. Fuzzy membership functions for annotation importance, defined through expert evaluation, rate each factor and quantify it as a factor score; the six factors are then combined by fuzzy synthetic decision, and the inference result is defuzzified into a numeric index, the "Annotation Score", representing the annotation's importance. Based on this score, KEM judges whether an annotation is poor, offers annotation-skill tips, and recommends high-quality annotations; readers are encouraged to reflect on their annotation behavior and to respond to KEM's feedback.

To verify the effectiveness of the Annotation Score, and to explore additional factors and adaptive designs that could improve KEM, the study planned a single-group posttest experiment. Participants were 19 graduate students of the E-Learning Master Program of Library and Information Studies at National Chengchi University who took the course Integrating Information Technology into Teaching. Over two weeks they cooperatively read and annotated an academic paper on an e-learning topic with the support of KALS and KEM, then completed a reading report and a reading-comprehension test, which together served as the measure of their reading comprehension; a questionnaire afterwards collected their opinions on KEM as a reference for future improvement.

The results show a low positive correlation between the Annotation Score and participants' reading-comprehension scores, which partially confirms the effectiveness of KEM. Among the six factors, anchor length and favorite consensus were the key factors for distinguishing participants' reading-comprehension ability; the fuzzy membership functions for annotation strategy and part of speech of the anchor word need revision; and anchor consensus and anchor location proved ineffective, although this may be due to errors in the calculation approach and to the type of article read, and remains to be evaluated further. Participants who operated the annotation functions more frequently also tended to show higher reading comprehension, so the frequency of annotation behavior could be adopted as an additional KEM factor in the future. Participants with low reading-comprehension ability were less willing to respond to KEM's annotation suggestions and made greater use of community interaction; a likely reason is that their immature reading literacy prevented them from judging whether the suggestions were correct, so they needed to consult other readers' annotations.

Future research can analyze these participants and their annotation data in greater depth, extend the improved KEM to other types of digital texts and other participants, combine it with cognitive-strategy instruction to build scaffolds for reading instruction, or apply KALS to support reading and learning in digital archives and digital libraries, opening up applications in other domains.
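The fuzzy synthetic evaluation behind the Annotation Score can be sketched roughly as follows. The membership functions, importance levels, level values, and equal weights below are invented placeholders — the expert-defined functions and weights exist only in the thesis itself — so this is an illustration of the technique, not the actual mechanism.

```python
# Illustrative fuzzy synthetic evaluation of an "Annotation Score".
# All membership functions, level values, and weights are assumptions.

def triangular(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def membership_vector(score):
    """Map a normalized factor score in [0, 1] to membership degrees
    in three assumed importance levels: low, medium, high."""
    return [
        triangular(score, -0.5, 0.0, 0.5),  # low
        triangular(score, 0.0, 0.5, 1.0),   # medium
        triangular(score, 0.5, 1.0, 1.5),   # high
    ]

def annotation_score(factor_scores, weights):
    """Weighted-average fuzzy composition over the factors, then
    centroid defuzzification over representative level values."""
    levels = [0.2, 0.5, 0.8]  # assumed values for low/medium/high
    agg = [0.0, 0.0, 0.0]
    for s, w in zip(factor_scores, weights):
        mv = membership_vector(s)
        for i in range(3):
            agg[i] += w * mv[i]
    total = sum(agg)
    if total == 0:
        return 0.0
    return sum(l * m for l, m in zip(levels, agg)) / total

# Six illustrative factor scores: anchor length, part of speech,
# anchor location, annotation strategy, anchor consensus, favorite consensus.
scores = [0.8, 0.4, 0.6, 0.9, 0.3, 0.7]
weights = [1 / 6] * 6  # equal weights as a placeholder
print(round(annotation_score(scores, weights), 3))  # → 0.57
```

A high-scoring annotation would then be recommended to peers, while a low score would trigger an annotation-skill tip.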
62

Predikce hodnot v čase / Prediction of Values on a Time Line

Maršová, Eliška January 2016 (has links)
This work deals with the prediction of numerical series, which is well suited to the prediction of stock prices. It explains procedures for analyzing and working with price charts, as well as machine learning methods. This knowledge is used to build a program that finds patterns in numerical series for estimation.
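A minimal flavor of numerical-series prediction is a naive drift model over a sliding window: predict the next value as the last value plus the average recent step. The window width, the drift rule, and the sample prices below are illustrative assumptions, not the models the thesis actually builds.

```python
# Naive drift prediction over a sliding window (illustrative only).

def predict_next(series, width):
    """Predict the next value as the last value plus the average step
    observed over the most recent `width` steps."""
    recent = series[-(width + 1):]
    steps = [b - a for a, b in zip(recent, recent[1:])]
    return series[-1] + sum(steps) / len(steps)

prices = [10.0, 10.5, 11.0, 11.5, 12.0, 12.5, 13.0, 13.5]
print(predict_next(prices, 3))  # → 14.0 (extrapolates the linear trend)
```

A learned model would replace the fixed drift rule with patterns mined from training windows.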
63

Approches intelligentes pour le pilotage adaptatif des systèmes en flux tirés dans le contexte de l'industrie 4.0 / Intelligent approaches for handling adaptive pull control systems in the context of industry 4.0

Azouz, Nesrine 28 June 2019 (has links)
Today, many production systems are managed as "pull" systems and use "card-based" methods such as Kanban, ConWIP, and COBACABANA. Despite their simplicity and efficiency, these methods are not suitable when production is unstable and customer demand varies. In such cases, production systems must adapt the tightness of their flow throughout the manufacturing process, which means determining how to dynamically adjust the number of cards (or e-cards) according to the context. Unfortunately, these decisions are complex and difficult to make in real time; moreover, changing the number of kanban cards too often can disrupt production and cause a nervousness problem. The opportunities offered by Industry 4.0 can be exploited to define smart flow-control strategies that dynamically adapt the number of kanban cards.

This thesis first proposes an adaptive approach based on simulation and multi-objective optimization, able to take the nervousness problem into account and to decide autonomously (or to help managers decide) when and where to add or remove kanban cards. It then proposes a new adaptive, intelligent approach based on a neural network, first trained offline using a digital twin (simulation) model exploited by a multi-objective optimization method. Once trained, the neural network can decide in real time when, and at which manufacturing stage, it is relevant to change the number of kanban cards. Comparisons with the best methods published in the literature show better results with less frequent changes.
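A simple threshold-based card-adjustment policy with a cooldown gives a flavor of the nervousness trade-off the thesis addresses. The class name, thresholds, and cooldown below are invented for illustration; they stand in for the simulation-optimization and neural-network approaches the thesis actually proposes.

```python
# Illustrative dynamic kanban card adjustment with a nervousness guard.
# Thresholds, cooldown, and the KanbanStage class are assumptions.

class KanbanStage:
    def __init__(self, cards, min_cards=1, max_cards=20, cooldown=5):
        self.cards = cards
        self.min_cards = min_cards
        self.max_cards = max_cards
        self.cooldown = cooldown       # periods to wait between changes
        self.since_change = cooldown   # allow an immediate first change

    def adjust(self, backlog, wip):
        """Add a card when demand backs up, remove one when WIP is idle,
        but never change more often than the cooldown allows.
        Returns 1 if the card count changed this period, else 0."""
        self.since_change += 1
        if self.since_change < self.cooldown:
            return 0  # nervousness guard: too soon to change again
        if backlog > 0 and self.cards < self.max_cards:
            self.cards += 1
        elif wip < self.cards // 2 and self.cards > self.min_cards:
            self.cards -= 1
        else:
            return 0
        self.since_change = 0
        return 1

stage = KanbanStage(cards=5)
observations = [(3, 5), (2, 6), (0, 2), (0, 1), (0, 1), (0, 1)]
changes = sum(stage.adjust(backlog=b, wip=w) for b, w in observations)
print(stage.cards, changes)  # → 5 2
```

The cooldown suppresses the frequent back-and-forth changes that cause nervousness; the thesis's neural network learns when a change is actually worthwhile instead of relying on fixed thresholds.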
64

Multimodal Data Management in Open-world Environment

K M A Solaiman (16678431) 02 August 2023 (has links)
The availability of abundant multimodal data, including textual, visual, and sensor-based information, holds the potential to improve decision-making in diverse domains. Extracting data-driven decision-making information from heterogeneous and changing datasets in real-world data-centric applications requires achieving complementary functionalities of multimodal data integration, knowledge extraction and mining, situationally-aware data recommendation to different users, and uncertainty management in the open-world setting. To achieve a system that encompasses all of these functionalities, several challenges need to be effectively addressed: (1) How to represent and analyze heterogeneous source contents and application context for multimodal data recommendation? (2) How to predict and fulfill current and future needs as new information streams in without user intervention? (3) How to integrate disconnected data sources and learn relevant information to specific mission needs? (4) How to scale from processing petabytes of data to exabytes? (5) How to deal with uncertainties in open-world that stem from changes in data sources and user requirements?

This dissertation tackles these challenges by proposing novel frameworks, learning-based data integration and retrieval models, and algorithms to empower decision-makers to extract valuable insights from diverse multimodal data sources. The contributions of this dissertation can be summarized as follows: (1) We developed SKOD, a novel multimodal knowledge querying framework that overcomes the data representation, scalability, and data completeness issues while utilizing streaming brokers and RDBMS capabilities with entity-centric semantic features as an effective representation of content and context. Additionally, as part of the framework, a novel text attribute recognition model called HART was developed, which leveraged language models and syntactic properties of large unstructured texts. (2) In the SKOD framework, we incrementally proposed three different approaches for data integration of the disconnected sources from their semantic features to build a common knowledge base with the user information need: (i) EARS: A mediator approach using schema mapping of the semantic features and SQL joins was proposed to address scalability challenges in data integration; (ii) FemmIR: A data integration approach for more susceptible and flexible applications, that utilizes neural network-based graph matching techniques to learn coordinated graph representations of the data. It introduces a novel graph creation approach from the features and a novel similarity metric among data sources; (iii) WeSJem: This approach allows zero-shot similarity matching and data discovery by using contrastive learning to embed data samples and query examples in a high-dimensional space, using features as a novel source of supervision instead of relevance labels. (3) Finally, to manage uncertainties in multimodal data management for open-world environments, we characterized novelties in multimodal information retrieval based on data drift. Moreover, we proposed a novelty detection and adaptation technique as an augmentation to WeSJem.

The effectiveness of the proposed frameworks, models, and algorithms was demonstrated through real-world system prototypes that solved open problems requiring large-scale human endeavors and computational resources. Specifically, these prototypes assisted law enforcement officers in automating investigations and finding missing persons.
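Zero-shot similarity matching of the kind WeSJem performs can be illustrated with plain cosine ranking over feature vectors. The toy embeddings, item ids, and feature names below are invented, and the contrastive training that would produce real embeddings is not reproduced here.

```python
# Illustrative query-by-example retrieval via cosine similarity over
# feature-based embeddings. All vectors and ids are toy assumptions.
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def rank_by_similarity(query_vec, corpus):
    """Return corpus item ids sorted by similarity to the query example."""
    return sorted(corpus,
                  key=lambda item_id: cosine(query_vec, corpus[item_id]),
                  reverse=True)

# Toy embeddings: each sample is a vector of assumed semantic features,
# e.g. [has_person, has_vehicle, is_outdoor].
corpus = {
    "report_17": [0.9, 0.1, 0.2],
    "camera_03": [0.8, 0.7, 0.9],
    "tweet_42":  [0.1, 0.9, 0.3],
}
query = [0.85, 0.2, 0.3]  # a query example focused on a person
print(rank_by_similarity(query, corpus))
# → ['report_17', 'camera_03', 'tweet_42']
```

In the dissertation's setting, contrastive learning would place samples sharing semantic features close together in the embedding space, so the same ranking step generalizes to queries never seen during training.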
