291 |
Atomic wear mechanisms of hard chrome against Al2O3 / Atomistisk nötnings mekanism av hård krom mot Al2O3. Fierro Tobar, Raul; Yuku, Marius. January 2021
Hard chrome exhibits a hardness of about 70 HRC and a lubricity that prevents seizing and galling, and it is therefore a common first choice for engineers who want to reduce friction and minimize wear. These properties enable engineering applications such as cutting and drilling, especially in the manufacturing, production and consumer goods industries. Hard chrome also serves a wide set of functions: it is decorative, corrosion resistant and eases cleaning procedures. Electroplating is the common process for synthesizing hard chrome, but this process has been banned by the EU because it releases hazardous components. The need for alternative materials is therefore growing, yet fundamental issues concerning hard chrome remain unsolved. The purpose of this work is to develop atomic structures for two systems using programs such as OpenMX, VESTA and Ovito. The goal is to identify the atomic wear mechanisms of hard chrome in an ideal system (Al2O3-Cr) and a real system (Al2O3-Cr2O3) using density functional theory (DFT). These two systems are analyzed because every surface oxidises in air (real system), while under increased mechanical loads the pristine surface of hard chrome (ideal system) can be exposed to the counter body (Al2O3). DFT-based molecular dynamics simulations are carried out at a temperature of 300 K and a sliding speed of 10 m/s. The simulation interval is 0-15000 fs, and the radial distribution function (RDF) is employed to analyse the atomic wear mechanisms. Both systems start to show adhesive wear due to amorphization, mixed with signs of abrasive wear on the atomic scale. The systems are further analyzed using the electron density distribution (EDD), which plots the electronic structure and thereby supports the analysis of the different types of bonds taking place. The bulk structures mainly show covalent bonds, with ionic and metallic bonds less represented. The same observations are made for the interfaces of the ideal and the real system.
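To illustrate the radial distribution function used above: for one frame of atomic positions in a periodic cell, g(r) is a histogram of pair distances normalized by the ideal-gas expectation. The sketch below is a generic illustration with placeholder coordinates, box size and bin width; it is not the OpenMX/Ovito workflow used in the thesis.

    # Minimal sketch of a radial distribution function g(r) for a periodic box.
    # This is NOT the thesis workflow (OpenMX/Ovito); positions, box size and
    # bin width are placeholder values chosen for illustration only.
    import numpy as np

    def radial_distribution(positions, box_length, r_max, dr):
        """g(r) for one frame of an orthorhombic periodic cell."""
        n = len(positions)
        bins = np.arange(0.0, r_max + dr, dr)
        hist = np.zeros(len(bins) - 1)
        for i in range(n - 1):
            # minimum-image distances from atom i to all later atoms
            diff = positions[i + 1:] - positions[i]
            diff -= box_length * np.round(diff / box_length)
            d = np.linalg.norm(diff, axis=1)
            hist += np.histogram(d, bins=bins)[0]
        shell_volumes = 4.0 / 3.0 * np.pi * (bins[1:]**3 - bins[:-1]**3)
        density = n / box_length**3
        # hist counts each unordered pair once, so the ordered-pair count is 2*hist
        return bins[:-1] + dr / 2, 2.0 * hist / (n * density * shell_volumes)

    # toy usage with random coordinates in a 10 Angstrom box
    rng = np.random.default_rng(0)
    r, g = radial_distribution(rng.uniform(0, 10, size=(200, 3)), 10.0, 5.0, 0.1)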
|
292 |
Automating Geospatial RDF Dataset Integration and Enrichment. Sherif, Mohamed Ahmed Mohamed. 12 May 2016
Over the last years, the Linked Open Data (LOD) cloud has evolved from a mere 12 to more than 10,000 knowledge bases. These knowledge bases come from diverse domains including (but not limited to) publications, life sciences, social networking, government, media and linguistics. Moreover, the LOD cloud also contains a large number of cross-domain knowledge bases such as DBpedia and Yago2. These knowledge bases are commonly managed in a decentralized fashion and contain partly overlapping information. This architectural choice has led to knowledge pertaining to the same domain being published by independent entities in the LOD cloud. For example, information on drugs can be found in Diseasome as well as DBpedia and Drugbank. Furthermore, certain knowledge bases such as DBLP have been published by several bodies, which in turn has led to duplicated content in the LOD cloud. In addition, large amounts of geo-spatial information have been made available with the growth of the heterogeneous Web of Data.
The concurrent publication of knowledge bases containing related information promises to become a phenomenon of increasing importance with the growth of the number of independent data providers. Enabling the joint use of the knowledge bases published by these providers for tasks such as federated queries, cross-ontology question answering and data integration is most commonly tackled by creating links between the resources described within these knowledge bases. Within this thesis, we spur the transition from isolated knowledge bases to enriched Linked Data sets where information can be easily integrated and processed. To achieve this goal, we provide concepts, approaches and use cases that facilitate the integration and enrichment of information with other data types that are already present on the Linked Data Web with a focus on geo-spatial data.
The first challenge that motivates our work is the lack of measures that use geographic data for linking geo-spatial knowledge bases. This is partly due to geo-spatial resources being described by means of vector geometry. In particular, discrepancies in granularity and error measurements across knowledge bases render the selection of appropriate distance measures for geo-spatial resources difficult. We address this challenge by surveying the existing literature for point-set measures that can be used to quantify the similarity of vector geometries. We then present and evaluate ten measures derived from this literature on samples of three real knowledge bases.
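As an illustration of the kind of point-set measure surveyed here, the sketch below computes the discrete Hausdorff distance between two vector geometries given as vertex arrays. The coordinates are invented, and this is not claimed to be one of the ten measures evaluated in the thesis.

    # Sketch of one classical point-set measure, the discrete Hausdorff distance,
    # for two geometries given as (lon, lat) vertex arrays.  This illustrates the
    # kind of measure surveyed; it is not the thesis's own code.
    import numpy as np

    def directed_hausdorff(a, b):
        # for every point in a, distance to its nearest point in b; take the max
        d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2)
        return d.min(axis=1).max()

    def hausdorff(a, b):
        return max(directed_hausdorff(a, b), directed_hausdorff(b, a))

    # two hypothetical polygon boundaries (coordinates are made up)
    geom1 = np.array([[12.48, 41.89], [12.50, 41.90], [12.49, 41.88]])
    geom2 = np.array([[12.47, 41.89], [12.51, 41.91], [12.49, 41.87]])
    print(hausdorff(geom1, geom2))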
The second challenge we address in this thesis is the lack of automatic Link Discovery (LD) approaches capable of dealing with geo-spatial knowledge bases that contain missing and erroneous data. To this end, we present Colibri, an unsupervised approach that allows discovering links between knowledge bases while improving the quality of the instance data in these knowledge bases. A Colibri iteration begins by generating links between knowledge bases. The approach then uses these links to detect resources with likely erroneous or missing information, which is finally corrected or added.
The third challenge we address is the lack of scalable LD approaches for tackling big geo-spatial knowledge bases. Thus, we present Deterministic Particle-Swarm Optimization (DPSO), a novel load balancing technique for LD on parallel hardware based on particle-swarm optimization. We combine this approach with the Orchid algorithm for geo-spatial linking and evaluate it on real and artificial data sets. The lack of approaches for automatic updating of links of an evolving knowledge base is our fourth challenge. This challenge is addressed in this thesis by the Wombat algorithm. Wombat is a novel approach for the discovery of links between knowledge bases that relies exclusively on positive examples. Wombat is based on generalisation via an upward refinement operator to traverse the space of Link Specifications (LS). We study the theoretical characteristics of Wombat and evaluate it on different benchmark data sets.
The last challenge addressed herein is the lack of automatic approaches for geo-spatial knowledge base enrichment. Thus, we propose Deer, a supervised learning approach based on a refinement operator for enriching Resource Description Framework (RDF) data sets. We show how we can use exemplary descriptions of enriched resources to generate accurate enrichment pipelines. We evaluate our approach against manually defined enrichment pipelines and show that our approach can learn accurate pipelines even when provided with a small number of training examples.
Each of the proposed approaches is implemented and evaluated against state-of-the-art approaches on real and/or artificial data sets. Moreover, all approaches are peer-reviewed and published as conference or journal papers. Throughout this thesis, we detail the ideas, implementation and evaluation of each of the approaches, discuss each approach and present lessons learned. Finally, we conclude this thesis by presenting a set of possible future extensions and use cases for each of the proposed approaches.
|
293 |
An approach to automate the adaptor software generation for tool integration in Application/Product Lifecycle Management tool chains. Singh, Shikhar. January 2016
An emerging problem in organisations is that a large number of tools store data and need to communicate with each other frequently throughout the development of an application or product. However, there is no means of communication that does not rely on the intervention of a central entity (usually a server) or on storing the schemas in a central repository, and accessing and linking data across tools is difficult and resource-intensive. As part of the thesis, we develop a piece of software (referred to as the 'adaptor' in the thesis), which, when implemented in the lifecycle management systems, integrates their data seamlessly. This is achieved by settling on a particular strategy for communication between the tools, one that merges several relevant concepts identified by studying new and upcoming methods suited to such scenarios. The adaptor eliminates the need to store database schemas in a central repository and makes accessing data across tools less resource-intensive. The adaptor acts as a wrapper around the tools and allows them to communicate directly with each other and exchange data: data in the relational databases is first converted into RDF and is then sent or received. RDF is therefore the crucial underlying concept on which the software is based. The Resource Description Framework (RDF) provides data integration irrespective of underlying schemas by treating data as resources and representing them as URIs. RDF is a data model used for the exchange and communication of data on the Internet and can be applied to other real-world problems such as tool integration and the automation of communication between relational databases. However, developing such an adaptor for every tool requires understanding the individual schema and structure of each tool's database, which demands considerable effort from the adaptor's developer. The main aim of the thesis is therefore to automate the development of these adaptors, eliminating the need for anyone to manually assess a database and then develop an adaptor specific to it. Such adaptors and concepts can be used to implement similar solutions in other organisations facing similar problems. In the end, the output of the thesis is an approach that automates the process of generating these adaptors.
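A minimal sketch of the core conversion step described above, turning one relational row into RDF triples with the rdflib library; the namespace, table and column names are hypothetical, and the generated adaptors themselves are not reproduced here.

    # Sketch of turning one relational row into RDF triples with rdflib.
    # The namespace, table and column names are hypothetical; the thesis's
    # generated adaptors are not reproduced here.
    from rdflib import Graph, Literal, Namespace, RDF, URIRef

    EX = Namespace("http://example.org/plm/")   # assumed base namespace
    g = Graph()

    row = {"id": 42, "name": "BrakeAssembly", "revision": "B"}   # one row from a hypothetical 'parts' table
    subject = URIRef(EX["parts/%d" % row["id"]])

    g.add((subject, RDF.type, EX.Part))
    g.add((subject, EX.name, Literal(row["name"])))
    g.add((subject, EX.revision, Literal(row["revision"])))

    print(g.serialize(format="turtle"))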
|
294 |
A Bayesian learning approach to inconsistency identification in model-based systems engineering. Herzig, Sebastian J. I. 08 June 2015
Designing and developing complex engineering systems is a collaborative effort. In Model-Based Systems Engineering (MBSE), this collaboration is supported through the use of formal, computer-interpretable models, allowing stakeholders to address concerns using well-defined modeling languages. However, because concerns cannot be separated completely, implicit relationships and dependencies among the various models describing a system are unavoidable. Given that models are typically co-evolved and only weakly integrated, inconsistencies in the agglomeration of the information and knowledge encoded in the various models are frequently observed. The challenge is to identify such inconsistencies in an automated fashion. In this research, a probabilistic (Bayesian) approach to abductive reasoning about the existence of specific types of inconsistencies and, in the process, semantic overlaps (relationships and dependencies) in sets of heterogeneous models is presented. A prior belief about the manifestation of a particular type of inconsistency is updated with evidence, which is collected by extracting specific features from the models by means of pattern matching. Inference results are then utilized to improve future predictions by means of automated learning. The effectiveness and efficiency of the approach is evaluated through a theoretical complexity analysis of the underlying algorithms, and through application to a case study. Insights gained from the experiments conducted, as well as the results from a comparison to the state-of-the-art have demonstrated that the proposed method is a significant improvement over the status quo of inconsistency identification in MBSE.
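A toy sketch of the central probabilistic step, updating a prior belief about the presence of an inconsistency with evidence gathered by pattern matching; all probabilities are invented, and the thesis's actual Bayesian model is far richer than this two-evidence example.

    # Toy illustration of updating a prior belief about an inconsistency with
    # pattern-matching evidence via Bayes' rule.  All probabilities are invented.
    def update(prior, p_evidence_given_true, p_evidence_given_false):
        numerator = p_evidence_given_true * prior
        marginal = numerator + p_evidence_given_false * (1.0 - prior)
        return numerator / marginal

    belief = 0.10                       # prior: 10% chance this inconsistency type is present
    belief = update(belief, 0.8, 0.2)   # a strong pattern matched across two models
    belief = update(belief, 0.6, 0.4)   # a second, weaker pattern also matched
    print(round(belief, 3))             # posterior after both pieces of evidence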
|
295 |
在Spark大數據平台上分析DBpedia開放式資料：以電影票房預測為例 / Analyzing DBpedia Linked Open Data (LOD) on Spark: Movie Box Office Prediction as an Example. 劉文友 (Liu, Wen Yu). Unknown Date
In recent years, Linked Open Data (LOD) has been recognized as containing a large amount of potential value. How to collect and integrate heterogeneous LOD contents and make them available to analysts for extraction and analysis has become an important research challenge. LOD is represented in the Resource Description Framework (RDF) format, which can be queried with SPARQL. However, large volumes of RDF data still lack a high-performance, scalable, integrated system for storage, querying and analysis, and research on analytics pipelines for big RDF data remains incomplete. This study takes movie box office prediction as an example: it uses the DBpedia LOD data set, links it to an external movie database (IMDb), and performs large-scale graph analytics on the Apache Spark platform. Prediction models are first built with the Naïve Bayes and Bayesian network algorithms, and the Bayesian Information Criterion (BIC) is used to find the best Bayesian network structure. Multi-class ROC curves and AUC values are then computed to evaluate the accuracy of the resulting prediction models.
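A hedged sketch of the Naïve Bayes and AUC steps on Spark MLlib is shown below. The feature vectors and labels are placeholders, the DBpedia/IMDb feature engineering is omitted, and the Bayesian network model with BIC-based structure selection is built outside MLlib and therefore not shown.

    # Sketch of a Naive Bayes + AUC step on Spark MLlib.  The feature vectors
    # are placeholders; the thesis's DBpedia/IMDb feature engineering and its
    # Bayesian-network model are not reproduced here.
    from pyspark.sql import SparkSession
    from pyspark.ml.linalg import Vectors
    from pyspark.ml.classification import NaiveBayes
    from pyspark.ml.evaluation import BinaryClassificationEvaluator

    spark = SparkSession.builder.appName("box-office-sketch").getOrCreate()

    # label 1.0 = "high box office"; features could encode budget class, runtime class, genre flag
    data = spark.createDataFrame([
        (1.0, Vectors.dense([2.0, 1.0, 1.0])),
        (0.0, Vectors.dense([0.0, 1.0, 0.0])),
        (1.0, Vectors.dense([2.0, 2.0, 1.0])),
        (0.0, Vectors.dense([1.0, 0.0, 0.0])),
    ], ["label", "features"])

    # toy illustration: fit and evaluate on the same tiny data set
    model = NaiveBayes(smoothing=1.0).fit(data)
    predictions = model.transform(data)

    auc = BinaryClassificationEvaluator(metricName="areaUnderROC").evaluate(predictions)
    print("AUC:", auc)
    spark.stop()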
|
296 |
Introduction de raisonnement dans un outil industriel de gestion des connaissances / Introducing reasoning into an industrial knowledge management tool. Carloni, Olivier. 24 November 2008
The thesis work presented in this document concerns the design of a validation and enrichment service for annotations in an industrial knowledge management tool based on the Topic Maps (TM) language. Because such a service requires reasoning over knowledge, it was necessary to give the TM language a formal semantics. This was achieved through a reversible transformation from Topic Maps to the logical formalism of conceptual graphs, which offers a graphical representation of knowledge (and Topic Maps can easily be given one). The solution was implemented in two applications, one designed for media monitoring and the other for promoting tourism resources. Schematically, annotations are extracted automatically from documents according to the domain concerned (news/economy or tourism) and added to the knowledge base. They are then handed to the enrichment and validation service, which completes them with new knowledge and decides on their validity, and finally returns the result of the enrichment and validation to the knowledge base.
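The sketch below illustrates, in a very reduced form, the idea of mapping Topic Maps constructs (topics and associations) onto conceptual-graph-like concept and relation nodes; the class names and the mapping are illustrative only and do not reproduce the thesis's formal, reversible transformation.

    # Very small sketch of mapping Topic Maps constructs onto conceptual-graph-like
    # nodes.  Class names and the mapping are illustrative only.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Topic:
        identifier: str
        topic_type: str

    @dataclass(frozen=True)
    class Association:
        assoc_type: str
        roles: tuple          # ((role_name, Topic), ...)

    def to_conceptual_graph(associations):
        """Return (concept_nodes, relation_edges) derived from TM associations."""
        concepts, relations = set(), []
        for a in associations:
            for role, topic in a.roles:
                concepts.add((topic.topic_type, topic.identifier))
                relations.append((a.assoc_type, role, topic.identifier))
        return concepts, relations

    paris = Topic("paris", "City")
    france = Topic("france", "Country")
    located = Association("located-in", (("containee", paris), ("container", france)))
    print(to_conceptual_graph([located]))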
|
297 |
Integrating XML and RDF concepts to achieve automation within a tactical knowledge management environment. McCarty, George E., Jr. 03 1900
Approved for public release, distribution is unlimited / Since the advent of naval warfare, Tactical Knowledge Management (KM) has been critical to the success of the On Scene Commander. Today's tactical knowledge manager typically operates in a high-stress environment with a multitude of knowledge sources, including detailed sensor deployment plans, rules-of-engagement contingencies, and weapon delivery assignments. However, the warfighter has relied heavily on delivering this data through traditional messaging processes, focusing on information organization rather than knowledge management. This information-oriented paradigm perpetuates data overload because it depends on manual intervention by human operators. Focusing on the data-archiving aspect of information management overlooks the advantages of computational processing and delays the use of the processor as an automated decision-making tool. The Resource Description Framework (RDF) and XML offer the potential for increased machine reasoning within a KM design, allowing the warfighter to migrate from a dependency on manual information systems to a more computationally intensive knowledge management environment. However, the unique environment of a tactical platform requires innovative solutions that automate the existing naval message architecture while improving the knowledge management process. This thesis captures the key aspects of building a prototype Knowledge Management Model and provides an implementation example for evaluation. The model was instantiated to evaluate the use of RDF and XML technologies in the knowledge management domain, with two goals for the prototype: (1) processing the required technical links in RDF/XML to feed the KM model from multiple information sources, and (2) experimenting with the visualization of knowledge management processing rather than traditional information resource display techniques. The results from working with the prototype KM model demonstrated the flexibility of processing all information under an XML context, and the RDF attribute format provided a convenient structure for automated decision making based on multiple information sources. Additional research utilizing RDF/XML technologies will eventually enable the warfighter to make effective decisions in a knowledge management environment. / Civilian, SPAWAR System Center San Diego
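A small sketch of the kind of automation argued for above: RDF/XML fragments from two information sources are merged into a single graph and answered by one query. The vocabulary and data are invented for illustration and are not taken from the thesis prototype.

    # Sketch: RDF/XML fragments from two sources are merged into one graph and
    # queried together.  The vocabulary and data below are invented.
    from rdflib import Graph

    sensor_report = """<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
                               xmlns:km="http://example.org/km#">
      <rdf:Description rdf:about="http://example.org/contact/C1">
        <km:bearing>045</km:bearing>
      </rdf:Description>
    </rdf:RDF>"""

    weapons_status = """<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
                                xmlns:km="http://example.org/km#">
      <rdf:Description rdf:about="http://example.org/contact/C1">
        <km:assignedWeapon>W2</km:assignedWeapon>
      </rdf:Description>
    </rdf:RDF>"""

    g = Graph()
    g.parse(data=sensor_report, format="xml")
    g.parse(data=weapons_status, format="xml")

    # one query now sees both sources, which is the point of the merged model
    q = """PREFIX km: <http://example.org/km#>
           SELECT ?contact ?bearing ?weapon WHERE {
             ?contact km:bearing ?bearing ;
                      km:assignedWeapon ?weapon .
           }"""
    for row in g.query(q):
        print(row.contact, row.bearing, row.weapon)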
|
298 |
La modélisation d'objets pédagogiques pour une plateforme sémantique d'apprentissage / The modeling of learning objects for a semantic learning platform. Balog-Crisan, Radu. 13 December 2011
In order to make Learning Objects (LO) accessible, reusable and adaptable, it is necessary to model them. Besides form and structure, one must also define the semantics associated with a given LO. Thus, we propose a modeling scheme for LOs that respects the LOM (Learning Object Metadata) standard and uses an RDF-based (Resource Description Framework) data model. In order to encode, exchange and reuse such structured metadata for LOs, we have developed the RDF4LOM (RDF for LOM) application. Using Semantic Web tools, we deliver a prototype of a semantic learning platform (SLCMS) that enhances internal resources, i.e. LOs modeled with RDF, as well as external resources (semantic wikis, blogs or calendars). The architecture of this SLCMS is based upon a semantic kernel whose role is to interpret metadata and create intelligent queries. We use ontologies to describe the semantic constraints and reasoning rules concerning the LOs. By means of accurate and complete ontologies, the LOs become machine-interpretable and machine-understandable. For the semantic Quiz module, we have developed the Quiz and LMD ontologies. The semantic learning platform enables searching for appropriate LOs, generating personalized learning paths for learners and, as an evolution, adaptation to learning styles.
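A toy sketch of the personalized learning path generation mentioned above: each learning object declares prerequisites, and a path is a prerequisite-respecting ordering of the objects a learner still needs. Identifiers and prerequisites are invented, and the LOM/RDF modelling itself is not reproduced here.

    # Toy sketch of building a learning path from LO metadata.  Identifiers and
    # prerequisites are invented; the thesis's LOM/RDF modelling is not shown.
    def learning_path(objects, prerequisites, already_known):
        """objects: iterable of LO ids; prerequisites: {lo: set(required los)}."""
        remaining = [lo for lo in objects if lo not in already_known]
        path, satisfied = [], set(already_known)
        while remaining:
            ready = [lo for lo in remaining if prerequisites.get(lo, set()) <= satisfied]
            if not ready:
                raise ValueError("circular or unsatisfiable prerequisites")
            nxt = ready[0]
            path.append(nxt)
            satisfied.add(nxt)
            remaining.remove(nxt)
        return path

    los = ["rdf-basics", "sparql", "ontologies", "quiz-1"]
    prereq = {"sparql": {"rdf-basics"}, "ontologies": {"rdf-basics"}, "quiz-1": {"sparql", "ontologies"}}
    print(learning_path(los, prereq, already_known=set()))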
|
299 |
Near-infrared spectroscopy for refuse derived fuel: Classification of waste material components using hyperspectral imaging and feasibility study of inorganic chlorine content quantification. Ševčík, Martin. January 2019
This degree project examined new possible applications of near-infrared (NIR) spectroscopy for the quantitative and qualitative characterization of refuse derived fuel (RDF). Two possible applications were examined as part of the project. The first was the use of NIR hyperspectral imaging for classification of common materials present in RDF. The classification was studied on artificial mixtures of materials commonly present in municipal solid waste and RDF. Data from the hyperspectral camera were used as input for training, validating and testing machine learning models. Three classification models were used in the project: partial least-squares discriminant analysis (PLS-DA), support vector machine (SVM), and radial basis neural network (RBNN). The best result for classifying the materials into 11 distinct classes was reached with the SVM (94% accuracy), although its high computational cost makes it less suitable for real-time deployment. The second-best result was reached with the RBNN (91%), and the lowest accuracy was recorded for the PLS-DA model (88%). On the other hand, the PLS-DA model was the fastest, being 10 times faster than the RBNN and 100 times faster than the SVM. NIR spectroscopy was concluded to be a suitable method for identifying the most common materials in an RDF mix, except for incombustible materials such as glass, metals, or ceramics. The second part of the project uncovered the potential of using NIR spectroscopy to identify the inorganic chlorine content in RDF. Experiments were performed on samples of textile impregnated with a water solution of kitchen salt, with NaCl representing the inorganic chlorine source. The results showed that salt contents of 0.2-1 wt.% can be identified in the absorbance spectra of the samples. The limitation appeared to be the water content of the examined samples: with too much water in a sample, the influence of the salt on the NIR absorbance spectrum of water was too small to be recognized. / FUDIPO
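A hedged sketch of the classification step: each hyperspectral pixel is an NIR spectrum (one reflectance value per wavelength band) labelled with a material class, and an SVM is trained on those spectra. The data below are random placeholders rather than the project's camera data, and the hyperparameters are illustrative.

    # Sketch of SVM classification of hyperspectral pixel spectra into material
    # classes.  Spectra and labels are random placeholders, not the project data.
    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    rng = np.random.default_rng(1)
    n_pixels, n_bands, n_classes = 600, 256, 11          # 11 material classes, as in the project
    X = rng.random((n_pixels, n_bands))                   # placeholder reflectance spectra
    y = rng.integers(0, n_classes, size=n_pixels)         # placeholder material labels

    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
    model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0, gamma="scale"))
    model.fit(X_train, y_train)
    print("test accuracy:", model.score(X_test, y_test))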
|
300 |
RequirementX: uma ferramenta para suporte à gerência de requisitos em Extreme Programming baseada em mapas conceituais / RequirementX: a tool to support requirements management in Extreme Programming based on concept maps. Martins, Júnior Machado. 23 February 2007
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior / One of the hardest tasks in building a software system is requirements elicitation, which is essentially a knowledge discovery activity. Many techniques are therefore used to minimize conflicting ideas, ill-formed concepts, redundant interpretations and missing data; scenarios, interviews, User Stories, viewpoints and Use Case diagrams are all techniques for reducing the distance between the analyst and the user during requirements elicitation. Concept maps have been used as an efficient way to represent knowledge. This research uses concept maps to organize, identify and refine concepts and software requirement definitions in a cooperative way, making use of the User Story format introduced by the Extreme Programming (XP) methodology. The proposed process is supported by a web-based tool, which automates the generation, organization and management of the requirements captured in the concept map format.
|