221 |
Learning OWL Class Expressions. Lehmann, Jens, 09 June 2010 (has links)
With the advent of the Semantic Web and Semantic Technologies, ontologies have become one of the most prominent paradigms for knowledge representation and reasoning. The popular ontology language OWL, based on description logics, became a W3C recommendation in 2004 and a standard for modelling ontologies on the Web. In the meantime, many studies and applications using OWL have been reported in research and industrial environments, many of which go beyond Internet usage and employ the power of ontological modelling in other fields such as biology, medicine, software engineering, knowledge management, and cognitive systems.
However, recent progress in the field is hampered by a lack of well-structured ontologies with large amounts of instance data, because engineering such ontologies requires a considerable investment of resources. Nowadays, knowledge bases often provide large volumes of data without sophisticated schemata. Hence, methods for automated schema acquisition and maintenance are sought. Schema acquisition is closely related to solving typical classification problems in machine learning, e.g. the detection of chemical compounds causing cancer. In this work, we investigate both the underlying machine learning techniques and their application to knowledge acquisition in the Semantic Web.
In order to leverage machine-learning approaches for solving these tasks, it is required to develop methods and tools for learning concepts in description logics or, equivalently, class expressions in OWL. In this thesis, it is shown that methods from Inductive Logic Programming (ILP) are applicable to learning in description logic knowledge bases. The results provide foundations for the semi-automatic creation and maintenance of OWL ontologies, in particular in cases when extensional information (i.e. facts, instance data) is abundantly available, while corresponding intensional information (schema) is missing or not expressive enough to allow powerful reasoning over the ontology in a useful way. Such situations often occur when extracting knowledge from different sources, e.g. databases, or in collaborative knowledge engineering scenarios, e.g. using semantic wikis. It can be argued that being able to learn OWL class expressions is a step towards enriching OWL knowledge bases in order to enable powerful reasoning, consistency checking, and improved querying possibilities. In particular, plugins for OWL ontology editors based on learning methods are developed and evaluated in this work.
The developed algorithms are not restricted to ontology engineering and can handle other learning problems. Indeed, they lend themselves to generic use in machine learning in the same way as ILP systems do. The main difference, however, is the employed knowledge representation paradigm: ILP traditionally uses logic programs for knowledge representation, whereas this work rests on description logics and OWL. This difference is crucial when considering Semantic Web applications as target use cases, as such applications hinge centrally on the chosen knowledge representation format for knowledge interchange and integration. The work in this thesis can be understood as a broadening of the scope of research and applications of ILP methods. This goal is particularly important since the number of OWL-based systems is already increasing rapidly and can be expected to grow further in the future.
The thesis starts by establishing the necessary theoretical basis and continues with the specification of algorithms. It also contains their evaluation and, finally, presents a number of application scenarios. The research contributions of this work are threefold:
The first contribution is a complete analysis of desirable properties of refinement operators in description logics. Refinement operators are used to traverse the target search space and are, therefore, a crucial element in many learning algorithms. Their properties (completeness, weak completeness, properness, redundancy, infinity, minimality) indicate whether a refinement operator is suitable for being employed in a learning algorithm. The key research question is which of those properties can be combined. It is shown that there is no ideal, i.e. complete, proper, and finite, refinement operator for expressive description logics, which indicates that learning in description logics is a challenging machine learning task. A number of other new results for different property combinations are also proven. The need for these investigations has already been expressed in several articles prior to this PhD work. The theoretical limitations, which were shown as a result of these investigations, provide clear criteria for the design of refinement operators. In the analysis, as few assumptions as possible were made regarding the used description language.
The second contribution is the development of two refinement operators. The first operator supports a wide range of concept constructors; it is shown to be complete and can be extended to a proper operator. It is, so far, the most expressive refinement operator designed for a description language. The second operator uses the lightweight language EL and is weakly complete, proper, and finite. It is straightforward to extend it to an ideal operator, if required; the result is the first published ideal refinement operator in description logics. While the two operators differ considerably in their technical details, they both use background knowledge efficiently.
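To make the idea of a downward refinement operator concrete, the following is a minimal sketch over a toy class-expression language. The class hierarchy, the single property, and the refinement rules are invented for illustration and are far simpler than the operators developed in the thesis.

```python
# Illustrative sketch only: a toy downward refinement operator over a tiny
# class-expression language (named classes, conjunction, existential
# restrictions). The hierarchy, property, and refinement rules are assumptions
# made for this example, not the operators of the thesis.

SUBCLASSES = {"Person": ["Parent"], "Parent": []}   # assumed toy hierarchy
PROPERTIES = ["hasChild"]                           # assumed toy property

def refine(expr):
    """Yield downward refinements (more specific expressions) of expr."""
    if isinstance(expr, str):                       # named class
        for sub in SUBCLASSES.get(expr, []):
            yield sub                               # replace by a subclass
        for prop in PROPERTIES:
            yield ("AND", expr, ("SOME", prop, "Thing"))  # add a restriction
    elif expr[0] == "AND":
        _, left, right = expr
        for r in refine(left):
            yield ("AND", r, right)                 # refine a conjunct
        for r in refine(right):
            yield ("AND", left, r)
    elif expr[0] == "SOME":
        _, prop, filler = expr
        for r in refine(filler):
            yield ("SOME", prop, r)                 # refine the filler

if __name__ == "__main__":
    for r in refine("Person"):
        print(r)
```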
The third contribution is the actual learning algorithms using the introduced operators. New redundancy elimination and infinity-handling techniques are introduced in these algorithms. According to the evaluation, the algorithms produce very readable solutions, while their accuracy is competitive with the state-of-the-art in machine learning. Several optimisations for achieving scalability of the introduced algorithms are described, including a knowledge base fragment selection approach, a dedicated reasoning procedure, and a stochastic coverage computation approach.
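A rough sketch of how a refinement-based learning loop with redundancy elimination and coverage-based scoring can fit together is given below. The scoring function, the trivial stand-in refinement step, and the toy knowledge base are assumptions of this example, not the algorithms of the thesis.

```python
# Illustrative sketch only: a top-down class-expression learning loop with
# redundancy elimination and accuracy-based scoring; every name and data item
# below is invented for this example.
import heapq, itertools

def covers(expr, individual, kb):
    # Assumed instance check: kb maps individuals to the named classes they belong to.
    return expr in kb.get(individual, set())

def accuracy(expr, positives, negatives, kb):
    tp = sum(covers(expr, p, kb) for p in positives)
    tn = sum(not covers(expr, n, kb) for n in negatives)
    return (tp + tn) / (len(positives) + len(negatives))

def refine(expr, kb_classes):
    # Trivial stand-in refinement: specialise to any other named class.
    return [c for c in kb_classes if c != expr]

def learn(positives, negatives, kb, kb_classes, steps=100):
    counter = itertools.count()                 # tie-breaker for the heap
    seen = set()                                # redundancy elimination
    frontier = [(-accuracy("Thing", positives, negatives, kb), next(counter), "Thing")]
    best = ("Thing", 0.0)
    for _ in range(steps):
        if not frontier:
            break
        neg_acc, _, expr = heapq.heappop(frontier)
        if -neg_acc > best[1]:
            best = (expr, -neg_acc)
        for child in refine(expr, kb_classes):
            if child in seen:
                continue                        # skip redundant expressions
            seen.add(child)
            heapq.heappush(frontier, (-accuracy(child, positives, negatives, kb),
                                      next(counter), child))
    return best

kb = {"anna": {"Person", "Parent"}, "ben": {"Person"}}
print(learn(["anna"], ["ben"], kb, ["Thing", "Person", "Parent"]))
```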
The research contributions are evaluated on benchmark problems and in use cases. Standard statistical techniques such as cross-validation and significance tests show that the approaches are very competitive. Furthermore, the ontology engineering case study provides evidence that the described algorithms can solve the target problems in practice. A major outcome of the doctoral work is the DL-Learner framework. It provides the source code for all algorithms and examples as open source and has been incorporated into other projects.
|
222 |
[pt] CONTRIBUIÇÕES AO PROBLEMA DE BUSCA POR PALAVRAS-CHAVE EM CONJUNTOS DE DADOS E TRAJETÓRIAS SEMÂNTICAS BASEADOS NO RESOURCE DESCRIPTION FRAMEWORK / [en] CONTRIBUTIONS TO THE PROBLEM OF KEYWORD SEARCH OVER DATASETS AND SEMANTIC TRAJECTORIES BASED ON THE RESOURCE DESCRIPTION FRAMEWORK. YENIER TORRES IZQUIERDO, 18 May 2021 (has links)
[pt] Busca por palavras-chave fornece uma interface fácil de usar para recuperar
informação. Esta tese contribui para os problemas de busca por palavras-chave
em conjuntos de dados sem esquema e trajetórias semânticas baseados
no Resource Description Framework.
Para endereçar o problema da busca por palavras-chave em conjuntos
de dados RDF sem esquema, a tese introduz um algoritmo para traduzir automaticamente
uma consulta K baseada em palavras-chave especificadas pelo
usuário em uma consulta SPARQL Q de tal forma que as respostas que Q retorna
também são respostas para K. O algoritmo não depende de um esquema
RDF, mas sintetiza as consultas SPARQL explorando a semelhança entre os
domínios e contradomínios das propriedades e os conjuntos de instâncias de
classe observados no grafo RDF. O algoritmo estima a similaridade entre conjuntos
com base em sinopses, que podem ser precalculadas, com eficiência, em
uma única passagem sobre o conjunto de dados RDF. O trabalho inclui dois
conjuntos de experimentos com uma implementação do algoritmo. O primeiro
conjunto de experimentos mostra que a implementação supera uma ferramenta
de pesquisa por palavras-chave sobre grafos RDF que explora o esquema RDF
para sintetizar as consultas SPARQL, enquanto o segundo conjunto indica que
a implementação tem um desempenho melhor do que sistemas de pesquisa
por palavras-chave em conjuntos de dados RDF baseados na abordagem de
documentos virtuais denominados TSA+BM25 e TSA+VDP. Finalmente, a
tese também computa a eficácia do algoritmo proposto usando uma métrica
baseada no conceito de relevância do grafo resposta.
O segundo problema abordado nesta tese é o problema da busca por
palavras-chave sobre trajetórias semânticas baseadas em RDF. Trajetórias semânticas
são trajetórias segmentadas em que as paradas e os deslocamentos de
um objeto móvel são semanticamente enriquecidos com dados adicionais. Uma
linguagem de consulta para conjuntos de trajetórias semânticas deve incluir
seletores para paradas ou deslocamentos com base em seus enriquecimentos
e expressões de sequência que definem como combinar os resultados dos seletores
com a sequência que a trajetória semântica define. A tese inicialmente
propõe um framework formal para definir trajetórias semânticas e introduz
expressões de sequências de paradas-e-deslocamentos (stop-and-move sequences),
com sintaxe e semântica bem definidas, que atuam como uma linguagem
de consulta expressiva para trajetórias semânticas. A tese descreve um modelo
concreto de trajetória semântica em RDF, define expressões de sequências
de paradas-e-deslocamentos em SPARQL e discute estratégias para compilar
tais expressões em consultas SPARQL. A tese define consultas sobre trajetórias
semânticas com base no uso de palavras-chave para especificar paradas e
deslocamentos e a adoção de termos com semântica predefinida para compor
expressões de sequência. Em seguida, descreve como compilar tais expressões
em consultas SPARQL, mediante o uso de padrões predefinidos. Finalmente,
a tese apresenta uma prova de conceito usando um conjunto de trajetórias semânticas
construído com conteúdo gerado pelos usuários do Flickr, combinado
com dados da Wikipedia. / [en] Keyword search provides an easy-to-use interface for retrieving information.
This thesis contributes to the problems of keyword search over schema-less
datasets and semantic trajectories based on RDF.
To address the problem of keyword search over schema-less RDF datasets,
this thesis introduces an algorithm to automatically translate a user-specified
keyword-based query K into a SPARQL query Q so that the answers Q returns
are also answers for K. The algorithm does not rely on an RDF schema, but it
synthesizes SPARQL queries by exploring the similarity between the property
domains and ranges, and the class instance sets observed in the RDF dataset.
It estimates set similarity based on set synopses, which can be efficiently precomputed
in a single pass over the RDF dataset. The thesis includes two
sets of experiments with an implementation of the algorithm. The first set
of experiments shows that the implementation outperforms a baseline RDF
keyword search tool that explores the RDF schema, while the second set of
experiments indicates that the implementation performs better than the state-
of-the-art TSA+BM25 and TSA+VDP keyword search systems over RDF
datasets based on the virtual documents approach. Finally, the thesis also
evaluates the effectiveness of the proposed algorithm using a metric based on
the concept of graph relevance.
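As a rough illustration of the kind of set-similarity signal such a translation algorithm can exploit, the sketch below estimates the overlap between a property's domain and a class's instance set from small synopses. MinHash is assumed here purely for illustration, and the toy IRIs and class/property names are invented; the thesis does not prescribe this particular synopsis.

```python
# Illustrative sketch only: estimating the similarity between a property's domain
# instance set and a class's instance set from small synopses, the kind of signal
# used to join keyword matches into a SPARQL query. MinHash and the toy data
# below are assumptions of this example.
import hashlib

def minhash(items, num_hashes=64):
    """Build a synopsis in one pass over a set of IRIs: the minimum hash per seed."""
    synopsis = []
    for seed in range(num_hashes):
        synopsis.append(min(
            int(hashlib.sha1(f"{seed}:{x}".encode()).hexdigest(), 16) for x in items))
    return synopsis

def estimated_jaccard(syn_a, syn_b):
    return sum(a == b for a, b in zip(syn_a, syn_b)) / len(syn_a)

# Toy dataset: instances of the class :Movie and subjects of the property :directedBy.
movie_instances = {f"http://ex.org/movie/{i}" for i in range(100)}
directed_by_domain = {f"http://ex.org/movie/{i}" for i in range(80)}  # mostly movies

sim = estimated_jaccard(minhash(movie_instances), minhash(directed_by_domain))
print(f"estimated overlap: {sim:.2f}")   # a high overlap licenses joining the patterns

# e.g. a synthesized query connecting both keyword matches (prefixes omitted):
# SELECT ?m WHERE { ?m a <http://ex.org/Movie> . ?m <http://ex.org/directedBy> ?d . }
```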
The second problem addressed in this thesis is the keyword search over
RDF semantic trajectories problem. Stop-and-move semantic trajectories are
segmented trajectories where the stops and moves are semantically enriched
with additional data. A query language for semantic trajectory datasets has
to include selectors for stops or moves based on their enrichments, and
sequence expressions that define how to match the results of selectors with
the sequence the semantic trajectory defines. The thesis first proposes a
formal framework to define semantic trajectories and introduces stop-and-move
sequence expressions, with well-defined syntax and semantics, which act as
an expressive query language for semantic trajectories. Then, it describes a
concrete semantic trajectory model in RDF, defines SPARQL stop-and-move
sequence expressions, and discusses strategies to compile such expressions
into SPARQL queries. Next, the thesis specifies user-friendly keyword search
expressions over semantic trajectories based on the use of keywords to specify
stop and move queries, and the adoption of terms with predefined semantics
to compose sequence expressions. It then shows how to compile such keyword
search expressions into SPARQL queries. Finally, it provides a proof-of-concept
experiment over a semantic trajectory dataset constructed with user-generated
content from Flickr, combined with Wikipedia data.
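The sketch below illustrates, under invented vocabulary and a made-up compilation scheme, how a small stop-and-move sequence expression could be compiled into a SPARQL query. The terms st:hasStop, st:hasMove, st:order and st:enrichedWith are hypothetical; the thesis defines its own RDF trajectory model and compilation strategies.

```python
# Illustrative sketch only: compiling a tiny stop-and-move sequence expression
# into a SPARQL query over a semantic-trajectory graph. The vocabulary and the
# compilation scheme are invented for this example; PREFIX declarations are
# omitted for brevity.

def compile_sequence(steps):
    """steps: list of ('stop' | 'move', keyword) in the order the trajectory must match."""
    patterns, filters = [], []
    for i, (kind, keyword) in enumerate(steps):
        part = "st:hasStop" if kind == "stop" else "st:hasMove"
        patterns.append(f"?traj {part} ?p{i} . ?p{i} st:order ?o{i} ; "
                        f"st:enrichedWith ?e{i} . ?e{i} rdfs:label ?l{i} .")
        filters.append(f'CONTAINS(LCASE(?l{i}), "{keyword.lower()}")')
    for i in range(len(steps) - 1):
        filters.append(f"?o{i} < ?o{i + 1}")        # enforce the sequence order
    return ("SELECT DISTINCT ?traj WHERE {\n  " + "\n  ".join(patterns) +
            "\n  FILTER(" + " && ".join(filters) + ")\n}")

# "a stop at a museum, then a move by bike, then a stop at a restaurant"
print(compile_sequence([("stop", "museum"), ("move", "bike"), ("stop", "restaurant")]))
```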
|
223 |
[en] DSCEP: AN INFRASTRUCTURE FOR DECENTRALIZED SEMANTIC COMPLEX EVENT PROCESSING / [pt] DSCEP: UMA INFRAESTRUTURA DISTRIBUÍDA PARA PROCESSAMENTO DE EVENTOS COMPLEXOS SEMÂNTICOS. VITOR PINHEIRO DE ALMEIDA, 28 October 2021 (has links)
[pt] Muitas aplicações necessitam do processamento de eventos de streams de
fontes diferentes em combinação com grandes quantidades de dados de bases de
conhecimento. CEP Semântico é um paradigma projetado especificamente
para isso: ele estende o processamento complexo de eventos (CEP) para
adicionar suporte para a linguagem RDF e utiliza uma rede de operadores
para processar streams RDF em combinação com bases de conhecimento em
RDF. Outra classe popular de sistemas projetados para um propósito similar
são os processadores de stream RDF (RSPs). Estes são sistemas que estendem a
linguagem SPARQL (a linguagem de consulta padrão para RDF) para adicionar
a capacidade de fazer consultas em stream. CEP Semântico e RSPs possuem
propósitos similares, porém focam em objetivos diferentes. O CEP Semântico
foca na escalabilidade e no processamento distribuído, enquanto os RSPs focam nos
desafios do processamento de streams RDF. Nesta tese, propomos o uso de
RSPs como unidades para processamento de streams RDF dentro do contexto
de CEP Semântico. Apresentamos uma infraestrutura, chamada DSCEP, que
permite o encapsulamento de RSPs existentes em operadores do estilo CEP,
de maneira que estes RSPs possam ser interconectados formando uma rede
de operadores distribuída e descentralizada. DSCEP lida com os desafios e
obstáculos desta interconexão, como comunicação confiável, divisão e agregação
de streams, identificação de eventos e time-stamping, etc., permitindo que os
usuários se concentrem nas consultas. Também discutimos nesta tese como o
DSCEP pode ser usado para diminuir o tempo de processamento de consultas
SPARQL monolíticas, seja dividindo-as em subconsultas e operando-as em
paralelo através do uso de operadores, seja dividindo a stream de entrada
em múltiplos operadores que possuem a mesma query e são executados em
paralelo. Além disso, também é avaliado o impacto que a base de conhecimento
possui no tempo de processamento de consultas contínuas. / [en] Many applications require the processing of event streams from different
sources in combination with large amounts of background knowledge. Semantic
CEP is a paradigm explicitly designed for that. It extends complex event
processing (CEP) with RDF support and uses a network of operators to process
RDF streams combined with RDF knowledge bases. Another popular class of
systems designed for a similar purpose is the RDF stream processors (RSPs).
These are systems that extend SPARQL (the RDF query language) with stream
processing capabilities. Semantic CEP and RSPs have similar purposes but
focus on different things. The former focuses on scalability and distributed
processing, while the latter tends to focus on the intricacies of RDF stream
processing per se. In this thesis, we propose the use of RSP engines as building
blocks for Semantic CEP. We present an infrastructure, called DSCEP, that
allows the encapsulation of existing RSP engines into CEP-like operators so
that these can be seamlessly interconnected in a distributed, decentralized
operator network. DSCEP handles the hurdles of such interconnection, such
as reliable communication, stream aggregation and slicing, event identification
and time-stamping, etc., allowing users to concentrate on the queries. We also
discuss how DSCEP can be used to speed up monolithic SPARQL queries, either by
splitting them into parallel subqueries that can be executed by the operator
network or even by splitting the input stream into multiple operators with the
same query running in parallel. Additionally, we evaluate the impact of the
knowledge base on the processing time of SPARQL continuous queries.
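The following sketch shows the general shape of a CEP-style operator that wraps a stream processor and forwards stamped, derived events to downstream operators, as in the operator networks described above. The classes, the trivial stand-in engine, and the wiring are invented for this example and do not reflect DSCEP's actual interfaces.

```python
# Illustrative sketch only: a toy operator network. The "RSP engine" here is a
# stand-in (sliding window plus a predicate), not a real RSP engine or DSCEP API.
from collections import deque

class ToyRSPEngine:
    """Stand-in for an RSP engine: keeps a sliding window and answers one query."""
    def __init__(self, window_size, predicate):
        self.window = deque(maxlen=window_size)
        self.predicate = predicate                   # e.g. "matches the continuous query"
    def push(self, triple):
        self.window.append(triple)
        return [t for t in self.window if self.predicate(t)]

class Operator:
    """Encapsulates an engine, stamps derived events, and feeds downstream operators."""
    def __init__(self, name, engine):
        self.name, self.engine, self.downstream, self.clock = name, engine, [], 0
    def connect(self, other):
        self.downstream.append(other)
    def receive(self, event):
        self.clock += 1
        for derived in self.engine.push(event):
            stamped = (self.name, self.clock, derived)   # event identification + timestamp
            for op in self.downstream:
                op.receive(stamped)

class Sink(Operator):
    def __init__(self, name):
        super().__init__(name, ToyRSPEngine(1, lambda t: True))
    def receive(self, event):
        print("complex event:", event)

filter_op = Operator("temp-filter", ToyRSPEngine(10, lambda t: t[2] > 30))
sink = Sink("alerts")
filter_op.connect(sink)
for reading in [("sensor1", "temp", 22), ("sensor1", "temp", 35)]:
    filter_op.receive(reading)
```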
|
224 |
Device profiling analysis in Device-Aware Network. Tsai, Shang-Yuan, 12 1900 (has links)
Approved for public release, distribution is unlimited / As more and more devices with a variety of capabilities become Internet-capable, device independence becomes a major issue when we want requested information to be displayed correctly. This thesis introduces and compares how existing standards create a profile that describes device capabilities in order to achieve the goal of device independence. Building on the importance of device independence, the thesis uses this idea to introduce a Device-Aware Network (DAN). DAN provides the infrastructure support for device-content compatibility matching for data transmission. We identify the major components of the DAN architecture and the issues associated with providing this new network service. A Device-Aware Network will improve the network's efficiency by preventing unusable data from consuming host and network resources. The device profile is the key to achieving this goal. / Captain, Taiwan Army
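As a rough illustration of device-content compatibility matching, the sketch below checks a piece of content against a toy device profile before it would be forwarded. The profile fields and content requirements are assumptions of this example rather than any particular standard, although real systems could rely on a CC/PP-style vocabulary.

```python
# Illustrative sketch only: a toy device profile and the kind of compatibility check
# a Device-Aware Network could perform before forwarding content. All fields and
# values are invented for this example.
from dataclasses import dataclass

@dataclass
class DeviceProfile:
    screen_width: int          # pixels
    supports_images: bool
    accepted_types: tuple      # MIME types the device can render

@dataclass
class Content:
    mime_type: str
    min_width: int             # minimum usable screen width in pixels

def is_deliverable(content: Content, profile: DeviceProfile) -> bool:
    """Return True only if the device can actually use the content."""
    if content.mime_type not in profile.accepted_types:
        return False
    if content.mime_type.startswith("image/") and not profile.supports_images:
        return False
    return profile.screen_width >= content.min_width

phone = DeviceProfile(screen_width=320, supports_images=True,
                      accepted_types=("text/html", "image/png"))
print(is_deliverable(Content("image/png", 240), phone))   # True: forward it
print(is_deliverable(Content("video/mp4", 320), phone))   # False: drop it early
```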
|
225 |
Analysis of Acid Gas Emissions in the Combustion of the Binder Enhanced d-RDF by Ion Chromatography. Jen, Jen-Fon, 08 1900 (has links)
Waste-to-energy has become an attractive alternative to landfills. One concern in this development is the release of pollutants in the combustion process. Binder-enhanced d-RDF pellets satisfy the requirements of environmental acceptance, chemical/biological stability, and storability. The acid gas emissions from co-firing d-RDF pellets with sulfur-rich coal were analyzed by ion chromatography and were found to decrease when d-RDF pellets were utilized. The results suggest that d-RDF pellets could substitute for sulfur-rich coal as fuel, and also substantiate the effectiveness of the binder, calcium hydroxide, in decreasing SOx emissions.
In order to perform the analysis of the combustion samples, sampling and sample pretreatment methods prior to the IC analysis, as well as the first-derivative detection mode in IC, are investigated. At least two trapping reagents are necessary for collecting acid gases: one for hydrogen halides, and the other for NOx and SOx. Factors affecting the absorption of acid gases are studied, and the strength of the oxidizing agent is the main factor affecting the collection of NOx and SOx. The absorption preference series of acid gases is determined, and absorption models of acid gases in the trapping reagents are derived from the analytical results. To prevent the back-flushing of trapping reagents between impingers during leak-checking, a design for the sampling train is suggested, which can be adopted in sample collection. Several reducing agents are studied for pretreating the samples collected in alkali-permanganate media. Besides the hydrogen peroxide solution recommended in the EPA method, methanol and formic acid are worth considering as alternative reducing agents in the pretreatment of alkaline-permanganate media prior to IC analysis. A first-derivative conductivity detection mode is developed and used in the IC system. It is efficient for the detection and quantification of overlapping peaks and is also applicable to non-overlapping peaks.
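A minimal sketch of the first-derivative treatment described above is given below, using a synthetic conductivity signal with two partially overlapping peaks; the signal and thresholds are invented for illustration and are not data from the thesis.

```python
# Illustrative sketch only: first-derivative processing of a conductivity signal.
import numpy as np

t = np.linspace(0, 10, 2000)                       # retention time (min)
signal = (np.exp(-((t - 4.0) / 0.15) ** 2) +       # two partially overlapping peaks
          0.6 * np.exp(-((t - 4.35) / 0.15) ** 2))

derivative = np.gradient(signal, t)                # first-derivative "detector" output

# A peak in the original signal appears as a + to - zero crossing of the derivative,
# which remains resolvable even when the peaks themselves overlap.
crossings = np.where((derivative[:-1] > 0) & (derivative[1:] <= 0))[0]
peaks = [t[i] for i in crossings if signal[i] > 0.05]
print("estimated peak positions (min):", [round(p, 2) for p in peaks])
```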
|
226 |
Estudo de caso dos possíveis efeitos deletérios causados pelo combustível derivado de resíduos (CDR) em caldeiras voltadas a produção de energia elétrica queimando principalmente bagaço de cana / Case study of the possible deleterious effects caused by refuse derived fuel (RDF) in boilers aimed at producing electric power burning mainly sugar cane bagasse. Sampaio, Raquel Paschoal, 04 May 2015 (links)
O estado de São Paulo produz cerca de 58.700 t/dia de resíduos dividido pelos seus 645 municípios nas vizinhanças de aproximadamente 170 usinas de açúcar e álcool. Diante deste fato, é evidente o potencial para se fazer o uso consorciado destes dois combustíveis na geração de energia. Este trabalho investigou os possíveis efeitos deletérios que a presença de cloro, flúor, sódio e potássio possam trazer nas caldeiras voltadas para a produção de energia elétrica, utilizando bagaço de cana e combustível derivado de resíduo (CDR). Foi realizada uma busca criteriosa na literatura internacional a fim de identificar possíveis efeitos deletérios em caldeiras de biomassa para a produção de energia em razão do uso consorciado de resíduo, no aspecto da integridade da caldeira, principalmente no papel desempenhado pelos elementos cloro, flúor, sódio e potássio, e em seguida uma análise criteriosa dos resultados encontrados. Esta análise foi realizada através de um estudo de caso, considerando uma caldeira de leito fluidizado borbulhante (BFB) de 60MW, queimando bagaço e parte do resíduo de uma cidade de 600.000 habitantes. Verificou-se que o resíduo que a cidade produz pode ser transformado em CDR que irá alimentar a caldeira como combustível auxiliar, produzindo energia elétrica de forma limpa e sustentável. Um parâmetro utilizado para se definir a quantidade máxima de CDR queimada na caldeira foi o cloro específico, calculado pela razão entre o teor de cloro e o poder calorífico inferior (PCI) do combustível. Com base na literatura encontrada, limitou-se o cloro específico em 40 mg/MJ, para que não haja danos à integridade do equipamento. A combustão consorciada de bagaço de cana e CDR pode ser uma alternativa para o estado de São Paulo reduzir o problema da falta de aterros para descarte de resíduos e uma possibilidade para as usinas de açúcar e álcool produzirem energia elétrica por um período mais extenso no ano, economizando bagaço de cana. / The state of São Paulo produces about 58,700 tons/day of waste, distributed across its 645 municipalities in the vicinity of about 170 sugar and alcohol mills. Given this fact, the potential for the combined use of these two fuels in power generation is evident. This work investigated the potential deleterious effects that the presence of chlorine, fluorine, sodium and potassium can have on boilers aimed at the production of electric power using sugar cane bagasse and refuse-derived fuel (RDF). A thorough search of the international literature was carried out for possible deleterious effects on biomass boilers for power generation caused by the combined use of waste, with respect to boiler integrity and particularly the role played by the elements chlorine, fluorine, sodium and potassium, followed by a careful analysis of the results. This analysis was conducted through a case study, considering a bubbling fluidized bed (BFB) boiler of 60 MW, burning bagasse and part of the residue of a city of 600,000 inhabitants. It was found that the residue that the city produces can be turned into RDF to feed the boiler as an auxiliary fuel, producing electricity in a clean and sustainable manner. A parameter used to set the maximum amount of RDF burned in the boiler was the specific chlorine, defined as the ratio between the chlorine content and the lower heating value (LHV) of the fuel. Based on the literature found, the specific chlorine was limited to 40 mg/MJ so that there would be no damage to the integrity of the equipment.
The combined combustion of bagasse and RDF can be an alternative for the state of São Paulo to reduce the problem of the lack of landfills for waste disposal, and an opportunity for the sugar and alcohol mills to produce electric power for a longer period of the year, saving sugar cane bagasse.
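A small worked example of the specific-chlorine calculation follows. Only the definition (chlorine content divided by LHV) and the 40 mg/MJ limit come from the abstract; the bagasse and RDF property values are rough assumptions made for illustration.

```python
# Illustrative sketch only: specific chlorine of a bagasse/RDF blend and the largest
# RDF share that stays within the 40 mg/MJ limit. Fuel properties are assumed values,
# not data from the thesis.

LIMIT_MG_PER_MJ = 40.0

# Assumed fuel properties (per kg of fuel, as received).
bagasse = {"cl_mg": 150.0,  "lhv_mj": 7.5}    # low chlorine, wet bagasse
rdf     = {"cl_mg": 8000.0, "lhv_mj": 15.0}   # chlorine-rich refuse-derived fuel

def specific_chlorine(rdf_fraction: float) -> float:
    """Specific chlorine (mg/MJ) of a blend with the given RDF mass fraction."""
    cl = rdf_fraction * rdf["cl_mg"] + (1 - rdf_fraction) * bagasse["cl_mg"]
    lhv = rdf_fraction * rdf["lhv_mj"] + (1 - rdf_fraction) * bagasse["lhv_mj"]
    return cl / lhv

# Largest RDF share that keeps the blend at or below the 40 mg/MJ limit.
max_share = max(f / 1000 for f in range(1001)
                if specific_chlorine(f / 1000) <= LIMIT_MG_PER_MJ)
print(f"max RDF mass fraction: {max_share:.1%}, "
      f"specific chlorine: {specific_chlorine(max_share):.1f} mg/MJ")
```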
|
227 |
Simulation structurale et approche des mécanismes de conduction ionique dans des verres à base de thioborates de lithium. Estournès, Claude, 14 April 1992 (has links) (PDF)
This work presents a structural study of glasses in the B2S3-Li2S system by computer modelling, neutron scattering, and X-ray photoelectron spectroscopy. These techniques provided information on the short- and medium-range order in these materials. The structural characterisations were correlated with the evolution of the physical properties of the glasses (ionic conductivity, density, and glass transition temperature) as a function of composition. In particular, knowledge of the local order in the most heavily modified binary glasses made it possible to propose an ionic conduction model, based on those used for ionic crystals, which leads to an activation energy comparable to the one obtained experimentally.
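As a small worked example of how an activation energy is extracted from conductivity measurements via the Arrhenius relation commonly used for ionic conductors, consider the sketch below; the two data points are invented, not taken from the thesis.

```python
# Illustrative sketch only: activation energy from two conductivity measurements,
# assuming the Arrhenius form sigma * T = A * exp(-Ea / (kB * T)).
import math

kB = 8.617e-5                                  # Boltzmann constant, eV/K
points = [(300.0, 1.0e-7), (400.0, 5.0e-5)]    # (temperature K, conductivity S/cm), assumed

(T1, s1), (T2, s2) = points
# Slope of ln(sigma * T) versus 1/T equals -Ea / kB.
slope = (math.log(s2 * T2) - math.log(s1 * T1)) / (1 / T2 - 1 / T1)
activation_energy = -slope * kB
print(f"activation energy ~ {activation_energy:.2f} eV")
```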
|
228 |
Construction d'un Web sémantique multi-points de vue. Bach Thanh, Lê, 10 1900 (has links) (PDF)
In this thesis, we study the problems of heterogeneity and consensus in a corporate Semantic Web. In the Semantic Web, an extension of the current Web, the semantics of resources is made explicit so that machines and software agents can "understand" and process them automatically, in order to ease the tasks of end users. A corporate Semantic Web is such a semantic web dedicated to a company or an organisation. The goal of this thesis is to enable the construction and exploitation of such a Semantic Web in a heterogeneous organisation comprising different knowledge sources and different categories of users, without eliminating heterogeneity, but by making heterogeneity and consensus coexist throughout the organisation. In the first part, we examine the problem of ontology heterogeneity. Ontologies are one of the fundamental building blocks of the Semantic Web. Several different ontologies may coexist in a heterogeneous organisation. To facilitate the exchange of information and knowledge encoded in different ontologies, we study algorithms for aligning existing ontologies. The proposed algorithms match ontologies represented in the RDF(S) and OWL languages recommended by the W3C for the Semantic Web. These algorithms are evaluated through ontology alignment evaluation campaigns. In the second part, we address the problem of building new ontologies in a heterogeneous organisation while taking into account the different viewpoints and terminologies of the various people, groups, or even communities within that organisation. Such an ontology, called a multi-viewpoint ontology, allows both heterogeneity and consensus to coexist in a heterogeneous organisation. We propose a multi-viewpoint knowledge representation model, called MVP, and a multi-viewpoint ontology language, an extension of the OWL ontology language called MVP-OWL, for building and exploiting multi-viewpoint ontologies in a corporate Semantic Web.
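As a rough illustration of the simplest kind of ontology alignment signal, the sketch below matches class labels from two toy ontologies by string similarity; the labels and the threshold are invented, and the thesis's alignment algorithms are considerably more elaborate.

```python
# Illustrative sketch only: a label-based matcher of the general kind used in
# ontology alignment. The two ontologies' class labels are invented for this example.
from difflib import SequenceMatcher

def label_similarity(a: str, b: str) -> float:
    """Normalised string similarity between two class labels."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def align(classes_a, classes_b, threshold=0.8):
    """Return candidate correspondences (classA, classB, score) above a threshold."""
    mappings = []
    for ca in classes_a:
        best = max(classes_b, key=lambda cb: label_similarity(ca, cb))
        score = label_similarity(ca, best)
        if score >= threshold:
            mappings.append((ca, best, round(score, 2)))
    return mappings

onto_a = ["Person", "Organisation", "ResearchProject"]
onto_b = ["Person", "Organization", "Project"]
print(align(onto_a, onto_b))
```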
|
229 |
An Atom-Probe Tomography Study of Phase Separation in Fe-Cr Based Steels. Zhou, Jing, January 2014 (has links)
Stainless steels are very important engineering materials in a variety of applications, such as the food industry and nuclear power plants, due to their combination of good mechanical properties and high corrosion resistance. However, ferrite-containing stainless steels are sensitive to the so-called ‘475°C embrittlement’, which is induced by phase separation of the ferrite phase, where it decomposes into Fe-rich ferrite (α) and Cr-rich ferrite (α'). The phase separation is accompanied by a severe loss of toughness. Therefore, the upper service temperature of ferrite-containing stainless steels in industrial applications has been limited to around 250°C. In the present work, Fe-Cr based steels were mainly investigated by atom probe tomography. A new method based on the radial distribution function (RDF) was proposed to quantitatively evaluate both the wavelength and amplitude of phase separation in Fe-Cr alloys from atom probe tomography data. Moreover, a simplified equation was derived to calculate the amplitude of phase separation. The wavelength and amplitude were compared with evaluations using the auto-correlation function (ACF) and the Langer-Bar-on-Miller (LBM) method, respectively. The results show that the commonly used LBM method underestimates the amplitude of phase separation, and that the wavelengths obtained by the RDF show a good exponential relation with aging time, as expected from theory. The RDF is also an effective method for detecting clustering and elemental partitioning. Furthermore, atom probe tomography and the developed quantitative analysis method have been applied to investigate the influence of different factors on the phase separation in Fe-Cr based alloys, mainly with the help of mechanical property tests and atom probe tomography analysis. The study shows that: (1) external tensile stress during aging enhances the phase separation in ferrite. (2) Phase separation in weld bead metals proceeds more rapidly than in both the heat-affected-zone metals and the base metals, mainly due to the high density of dislocations in the weld bead metals, which could facilitate diffusion. (3) Ni and Mn can enhance the phase separation compared to the binary Fe-Cr alloy, whereas Cu forms clusters during aging. (4) Initial clustering of Cr atoms was found after homogenization; clustering of Cr above the miscibility gap and clustering during quenching were suggested as the two responsible mechanisms. (5) The homogenization temperatures significantly influence the evolution of phase separation in Fe-46.5at.%Cr. / QC 20140910 / Spinodal Project
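A bare-bones sketch of a radial distribution function computed from a synthetic Cr atom cloud is given below. The box size, bin width, and random data are assumptions for illustration; a real APT analysis works on the reconstructed detector data and accounts for edge effects and detection efficiency.

```python
# Illustrative sketch only: a rough radial distribution function (RDF) for Cr atoms
# in a synthetic, homogeneous atom cloud. All numbers below are invented.
import numpy as np

rng = np.random.default_rng(0)
box = 20.0                                          # nm, cubic analysis volume
cr_positions = rng.uniform(0, box, size=(1000, 3))  # synthetic Cr atom positions

def radial_distribution(positions, r_max=5.0, dr=0.1, box_size=box):
    """Histogram of pair distances, normalised by the ideal-solution expectation."""
    diff = positions[:, None, :] - positions[None, :, :]
    dist = np.sqrt((diff ** 2).sum(axis=-1))
    dist = dist[np.triu_indices(len(positions), k=1)]    # unique pairs only
    bins = np.arange(0.0, r_max + dr, dr)
    counts, edges = np.histogram(dist, bins=bins)
    shell_volumes = 4.0 / 3.0 * np.pi * (edges[1:] ** 3 - edges[:-1] ** 3)
    density = len(positions) / box_size ** 3
    expected = density * shell_volumes * len(positions) / 2.0
    r = 0.5 * (edges[1:] + edges[:-1])
    return r, counts / expected        # ~1 everywhere for a random solid solution

r, g = radial_distribution(cr_positions)
print(g[10:15])   # values near 1 around r ~ 1 nm: no Cr clustering in this test cloud

# In a phase-separated alloy, g(r) > 1 at short r; the decay of g(r) - 1 relates to
# the wavelength, and its short-range value to the amplitude, of the decomposition.
```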
|
230 |
SemProj: ein Semantic Web - basiertes System zur Unterstützung von Workflow- und Projektmanagement. Langer, André. Gaedke, Martin. January 2008 (has links)
Chemnitz, Technische Universität, diploma thesis, [2008]. / Alternative title: Semantic Web - basiertes System zur Unterstützung von Workflow- und Projektmanagement.
|