481 |
Strukturní analýza vybraných silicidů přechodných kovů pomocí rentgenové difrakce a dynamického upřesňování dat z elektronové difrakce / Structure analysis of some transition metal silicides using X-ray diffraction and dynamical refinement against electron diffraction data. Antunes Corrêa, Cinthia. January 2017 (has links)
Title: Structure analysis of some transition metal silicides using X-ray diffraction and dynamical refinement against electron diffraction data Author: Cinthia Antunes Corrêa Department: Physics of Materials Supervisor: prof. RNDr. Miloš Janeček, CSc., Department of Physics of Materials Abstract: This thesis presents the crystal structure analysis of several transition metal silicides. The crystal structures were studied primarily by precession electron diffraction tomography (PEDT) employing the dynamical refinement, a method recently developed for accurate crystal structure refinement against electron diffraction data. The optimal values of the parameters of the method were proposed based on the comparison between the dynamical refinement of PEDT data and a high-quality reference structure. We present the results of this comparison using a Ni2Si nanowire with a diameter of 15 nm. The average atomic distance between the model obtained by the dynamical refinement of PEDT data and the one obtained by single-crystal X-ray diffraction was 0.006 Å. Knowing the accuracy and limitations of the method, the crystal structure of Ni3Si2 was redetermined on a nanowire with a diameter of 35 nm. The model obtained had an average error in the atomic positions of 0.006 Å. These results show that the accuracy achieved by the dynamical...
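The accuracy figures quoted above are average atomic distances between two refined models of the same structure. As a rough illustration, the following sketch computes such a metric, assuming both models are given as fractional coordinates of the same atoms, in the same order, in a common unit cell; the coordinates and cell below are made-up placeholders, not the published Ni2Si data.

```python
import numpy as np

def average_atomic_distance(frac_a, frac_b, cell):
    """Mean Cartesian distance between matched atoms of two structure models.

    frac_a, frac_b : (N, 3) fractional coordinates of the same atoms in the same
                     order (e.g. PEDT-refined model vs. X-ray reference model).
    cell           : (3, 3) matrix whose rows are the lattice vectors in Angstrom.
    """
    diff = np.asarray(frac_a) - np.asarray(frac_b)
    diff -= np.round(diff)              # minimum-image convention for periodic cells
    cart = diff @ np.asarray(cell)      # fractional -> Cartesian displacements
    return np.linalg.norm(cart, axis=1).mean()

# Illustrative use with made-up coordinates (not the published data):
cell = np.diag([5.0, 7.0, 3.7])                        # orthorhombic cell, Angstrom
pedt = np.array([[0.10, 0.25, 0.00], [0.37, 0.75, 0.50]])
xray = np.array([[0.101, 0.251, 0.999], [0.369, 0.749, 0.501]])
print(f"average atomic distance: {average_atomic_distance(pedt, xray, cell):.4f} Å")
```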
482 |
Neural-Symbolic Integration. Bader, Sebastian. 05 October 2009 (has links)
In this thesis, we discuss different techniques to bridge the gap between two different approaches to artificial intelligence: the symbolic and the connectionist paradigm. Both approaches have quite contrasting advantages and disadvantages. Research in the area of neural-symbolic integration aims at bridging the gap between them.
Starting from a human-readable logic program, we construct connectionist systems that behave equivalently. Those systems can then be trained, and the refined knowledge can later be extracted.
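As a rough illustration of the translation step, the sketch below compiles a small propositional logic program into a two-layer threshold network whose synchronous update realises the immediate-consequence operator, in the spirit of the core method; the example program and the encoding details are illustrative assumptions, not the exact construction developed in the thesis.

```python
import numpy as np

# Minimal sketch: compile a propositional logic program into a two-layer threshold
# network whose one-step update realises the immediate-consequence operator T_P.

program = {                     # head: list of bodies, each body = (positive, negated)
    "a": [([], [])],            # a.            (fact)
    "b": [(["a"], [])],         # b :- a.
    "c": [(["b"], ["d"])],      # c :- b, not d.
}
atoms = sorted(set(program) | {x for bs in program.values() for p, n in bs for x in p + n})
idx = {a: i for i, a in enumerate(atoms)}
rules = [(h, p, n) for h, bs in program.items() for p, n in bs]

# Hidden layer: one unit per rule, fires iff its body is satisfied.
W_h = np.zeros((len(rules), len(atoms)))
theta_h = np.zeros(len(rules))
for r, (_, pos, neg) in enumerate(rules):
    W_h[r, [idx[a] for a in pos]] = 1.0
    W_h[r, [idx[a] for a in neg]] = -1.0
    theta_h[r] = len(pos) - 0.5           # all positive literals present, no negated one

# Output layer: a head atom fires iff at least one of its rule units fires (an OR gate).
W_o = np.zeros((len(atoms), len(rules)))
for r, (head, _, _) in enumerate(rules):
    W_o[idx[head], r] = 1.0
theta_o = np.full(len(atoms), 0.5)

def step(x):                               # one synchronous network update = one T_P step
    hidden = (W_h @ x > theta_h).astype(float)
    return (W_o @ hidden > theta_o).astype(float)

x = np.zeros(len(atoms))
for _ in range(len(atoms) + 1):            # iterate to the least fixpoint of T_P
    x_new = step(x)
    if np.array_equal(x_new, x):
        break
    x = x_new
print({a: bool(x[idx[a]]) for a in atoms})   # expected: a, b, c true; d false
```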
483 |
Tau-Equivalences and Refinement for Petri Nets Based Design. Tarasyuk, Igor V. 27 November 2012 (has links)
The paper is devoted to the investigation of behavioral equivalences of concurrent systems modeled by Petri nets with silent transitions. Basic τ-equivalences and back-forth τ-bisimulation equivalences known from the literature are supplemented by new ones, giving rise to a complete set of equivalence notions in interleaving / true concurrency and linear / branching time semantics. Their interrelations are examined for the general class of nets as well as for the subclasses of nets without silent transitions and sequential nets (nets without concurrent transitions). In addition, the preservation of all the equivalence notions by refinements (allowing one to consider the systems being modeled at lower abstraction levels) is investigated.
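As a rough illustration of the most basic notion involved, the sketch below checks weak (τ-)bisimilarity of two finite labelled transition systems, such as the reachability graphs of small nets, by pruning a candidate relation down to its greatest fixpoint; the example systems are illustrative and the Petri-net-to-graph construction is omitted.

```python
from itertools import product

# Minimal sketch: naive greatest-fixpoint check of weak (tau-)bisimulation on finite
# labelled transition systems. Silent steps are labelled "tau".

def weak_successors(trans, state, action):
    """States reachable by tau* a tau* (or by tau* alone if action == 'tau')."""
    def tau_closure(states):
        seen, stack = set(states), list(states)
        while stack:
            s = stack.pop()
            for (p, a, q) in trans:
                if p == s and a == "tau" and q not in seen:
                    seen.add(q); stack.append(q)
        return seen
    pre = tau_closure({state})
    if action == "tau":
        return pre
    mid = {q for (p, a, q) in trans if p in pre and a == action}
    return tau_closure(mid)

def weakly_bisimilar(states1, trans1, init1, states2, trans2, init2):
    rel = set(product(states1, states2))        # start from the full relation ...
    changed = True
    while changed:                               # ... and prune until stable
        changed = False
        for (s, t) in list(rel):
            ok = True
            for (p, a, q) in trans1:
                if p == s and not any((q, t2) in rel for t2 in weak_successors(trans2, t, a)):
                    ok = False; break
            if ok:
                for (p, a, q) in trans2:
                    if p == t and not any((s2, q) in rel for s2 in weak_successors(trans1, s, a)):
                        ok = False; break
            if not ok:
                rel.discard((s, t)); changed = True
    return (init1, init2) in rel

# Two small systems: one does an internal step before 'a', the other does 'a' directly.
t1 = [("p0", "tau", "p1"), ("p1", "a", "p2")]
t2 = [("q0", "a", "q1")]
print(weakly_bisimilar({"p0", "p1", "p2"}, t1, "p0", {"q0", "q1"}, t2, "q0"))  # True
```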
484 |
Croissance cristalline, structure et propriétés de transport thermique des cuprates unidimensionnels Sr2CuO3, SrCuO2 et La5Ca9Cu24O41 / Crystal growth, structure and heat transport properties of one-dimensional cuprates Sr2CuO3, SrCuO2 and La5Ca9Cu24O41. Saint-Martin, Romuald. 28 September 2012 (links)
Today's new technologies bring increasing demands to the electronics industry, whose capacity of electronic circuits and related microprocessors increases very rapidly, following Moore's law. The increasing number of transistors per unit area brings about significant heating, which may be harmful to the good functioning of the systems and creates problems in evacuating the very localized heat generated in the electronic components. In order to control the heat flow which is produced, it is essential to use new materials able to conduct heat rapidly and efficiently, i.e. unidirectionally, toward a heat sink.
The present thesis work deals with the above-described issues and presents the study of materials which have to be electrically insulating, in order to avoid short circuits in the electronic components, and which also exhibit a strong anisotropy of the thermal conductivity, in order to evacuate the heat exclusively in one direction. Highly conducting single crystals are therefore required. In order to carry out thermal conductivity measurements under the best conditions, perfectly homogeneous single crystals of excellent quality were synthesized by the Travelling Solvent Zone Method. This crucible-free crystal growth method allows the synthesis of impurity-free single crystals several centimetres long. The investigated materials are the low-dimensional cuprates Sr2CuO3, SrCuO2 and La5Ca9Cu24O41, whose structures exhibit an arrangement of spin-½ Cu2+ ions as linear chains or ladders, thus showing a distinct 1D character. Their thermal conductivity in the 1D direction is described as the sum of two contributions, one phononic and the other of magnetic origin, linked to the spins of the copper ions. In order to obtain a better understanding of the different competing interaction mechanisms, the influence of the purity of the compounds, and of doping on the copper site, on the magnetic heat conduction has been investigated. Sample purity plays an important role in the magnetic thermal conductivity at low temperature, owing to reduced spinon-defect interactions. Furthermore, structural studies by X-ray and neutron diffraction were carried out on each compound and revealed distortions in the structure of La5Ca9Cu24O41.
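The two-contribution description above is often exploited by estimating the phononic part from a measurement perpendicular to the chains and subtracting it from the conductivity measured along the chains. A minimal sketch of that bookkeeping is given below, under that working assumption and with placeholder numbers rather than measured data.

```python
import numpy as np

# Minimal sketch: extract the magnetic contribution to the thermal conductivity along
# the 1D direction as kappa_mag = kappa_parallel - kappa_phonon, approximating the
# phononic part by the conductivity measured perpendicular to the chains.
# All values below are illustrative placeholders, not measured data.

T = np.array([10.0, 20.0, 50.0, 100.0, 200.0, 300.0])            # temperature (K)
kappa_par  = np.array([400.0, 650.0, 300.0, 140.0, 90.0, 70.0])  # W m^-1 K^-1, along chains
kappa_perp = np.array([350.0, 500.0, 150.0,  40.0, 20.0, 12.0])  # W m^-1 K^-1, perpendicular

kappa_mag = kappa_par - kappa_perp       # estimated magnetic contribution in the chain direction
for t, km in zip(T, kappa_mag):
    print(f"T = {t:5.0f} K   kappa_mag ~ {km:6.1f} W/(m K)")
```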
485 |
Approaches to accommodate remeshing in shape optimization. Wilke, Daniel Nicolas. 20 January 2011 (links)
This study proposes novel methodologies for the optimization of problems that exhibit non-physical step discontinuities. More specifically, it is proposed to use gradient-only techniques, which use no zeroth-order information at all, for step-discontinuous problems. A notable step-discontinuous problem is the shape optimization problem in the presence of remeshing strategies, since changes in mesh topology may, and normally do, introduce non-physical step discontinuities. These discontinuities may in turn manifest themselves as non-physical local minima in which optimization algorithms may become trapped. Conventional optimization approaches for step-discontinuous problems include evolutionary strategies and design of experiments (DoE) techniques. These conventional approaches typically rely on the exclusive use of zeroth-order information to overcome the discontinuities, but are characterized by two important shortcomings: firstly, the computational demands of zeroth-order methods may be very high, since many function values are in general required; secondly, the use of zeroth-order information only does not necessarily guarantee that the algorithms will not terminate in highly unfit local minima. In contrast, the methodologies proposed herein use only first-order information, rather than only zeroth-order information. The motivation for this approach is that the associated gradient information remains accurately and uniquely computable in the presence of remeshing, notwithstanding the discontinuities. From a computational-effort point of view, a gradient-only approach is comparable to conventional gradient-based techniques. In addition, the step discontinuities do not manifest themselves as local minima. / Thesis (PhD)--University of Pretoria, 2010. / Mechanical and Aeronautical Engineering / unrestricted
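The gradient-only idea can be illustrated with a line search that locates a sign change of the directional derivative instead of comparing function values, so that spurious steps in the objective cannot create artificial minima. The sketch below is a generic illustration under that assumption, not the specific algorithms developed in the thesis.

```python
import numpy as np

# Minimal sketch of gradient-only descent: steps are accepted by looking at the sign of
# the directional derivative only, never at function values, so non-physical step
# discontinuities (e.g. from remeshing) cannot trap the iteration in spurious minima.

def grad_only_line_search(grad, x, d, step=1.0, grow=2.0, iters=40):
    """Find a step a > 0 where the directional derivative g(x + a d) . d changes sign."""
    a_lo, a_hi = 0.0, step
    while grad(x + a_hi * d) @ d < 0 and a_hi < 1e6:   # still descending: grow the bracket
        a_lo, a_hi = a_hi, a_hi * grow
    for _ in range(iters):                              # bisect on the sign, not on f
        a_mid = 0.5 * (a_lo + a_hi)
        if grad(x + a_mid * d) @ d < 0:
            a_lo = a_mid
        else:
            a_hi = a_mid
    return 0.5 * (a_lo + a_hi)

def gradient_only_descent(grad, x0, tol=1e-8, max_iter=100):
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        d = -g
        x = x + grad_only_line_search(grad, x, d) * d
    return x

# Toy problem: the gradient of a quadratic is smooth, while the (unused) function value
# could contain arbitrary step discontinuities without affecting the iteration.
grad = lambda x: 2.0 * (x - np.array([3.0, -1.0]))
print(gradient_only_descent(grad, np.zeros(2)))         # ~ [3, -1]
```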
486 |
DEVELOPING MULTIPRONGED MODELS TO ENHANCE EFFECTIVENESS OF THE MANAGEMENT SYSTEM OF QUALITY CONTROL LABORATORIES. ADDITIONAL FOCUS ON SYNTHESIS AND CHARACTERIZATION FOR 5 NEW SALTS OF BEDAQUILINE. Mercy A Okezue (12436116). 20 April 2022 (links)
This multidisciplinary study evaluated Quality Control (QC) laboratory (lab) accreditation and carried out a salt screen for bedaquiline. Medicines testing facilities always seek to ensure the accuracy of data from their QC labs by attaining accreditation. This research proposed that an understanding of the cross-linkages in the requirements for implementing the 2 most widely used lab standards would facilitate testing efficiencies and reduce the risks of accreditation failure. For the salt project, the study proposed that new salts of bedaquiline would be formed from acid-base reactions following the pKa rule. Characterizing the salts would provide specifications for the new molecular entities and form selection criteria for a lead candidate.
The research reviewed 2 lab standards, ISO/IEC 17025:2017 and the WHO Good Practices for Pharmaceutical QC Laboratories, and identified the areas of overlap in their requirements. It then developed and tested affordable models that mitigate the 3 identified areas of high risk to lab accreditation. Additionally, equimolar amounts of bedaquiline base were mixed in organic solvents with selected counterions differing by at least 2 pKa units, to yield salts. ICH Q6 guidance was used to characterize the new salts.
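The ΔpKa heuristic mentioned above can be expressed as a simple screen: proton transfer, and hence salt formation, is considered likely when the pKa of the protonated base exceeds the acid pKa of the counterion by at least two units. The sketch below illustrates this rule with placeholder pKa values; none of the numbers are measured data from the thesis.

```python
# Minimal sketch of the Delta-pKa screening rule: a counterion is retained when the pKa
# of the basic centre exceeds the acid pKa of the counterion by at least 2 units.
# All pKa values below are illustrative placeholders, not measured data.

BASE_PKA = 9.0                     # assumed pKa of the protonated basic centre (placeholder)

counterion_pkas = {                # strongest acidic pKa of each candidate (placeholders)
    "hydrochloric acid": -6.0,
    "fumaric acid": 3.0,
    "citric acid": 3.1,
    "benzoic acid": 4.2,
    "acetic acid": 4.8,
}

def passes_pka_rule(base_pka, acid_pka, margin=2.0):
    """Classical heuristic: pKa(base) - pKa(acid) >= margin suggests a stable salt."""
    return (base_pka - acid_pka) >= margin

for acid, pka in sorted(counterion_pkas.items(), key=lambda kv: kv[1]):
    verdict = "candidate" if passes_pka_rule(BASE_PKA, pka) else "unlikely to form a salt"
    print(f"{acid:18s} pKa={pka:5.1f}  ->  {verdict}")
```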
The highest risks to laboratory accreditation were linked to 3 quality-system metrics, namely 'Proficiency Testing', 'Validation', and 'Measurement Traceability'. Using the identified areas of overlap in the 2 laboratory standards, this research provided tutorial videos, a competency matrix, and instrument validation data to optimize the requirements for lab accreditation. For the salt screen, five new candidates were synthesized as alternatives to the existing fumarate salt of bedaquiline. Their physicochemical properties were used to select a lead moiety.
The research provided evidence that the multipronged models developed will improve efficiencies in QC labs and increase their chances of attaining international accreditation. It also identified the best routes for synthesizing the new salts of bedaquiline and provided critical data to help the pharmaceutical industry make an informed choice of a lead candidate.
487 |
Building Information Extraction and Refinement from VHR Satellite Imagery using Deep Learning Techniques. Bittner, Ksenia. 26 March 2020 (links)
Building information extraction and reconstruction from satellite images is an essential task for many applications related to 3D city modeling, planning, disaster management, navigation, and decision-making. Building information can be obtained and interpreted from several data sources, such as terrestrial measurements, airplane surveys, and space-borne imagery. However, the latter acquisition method outperforms the others in terms of cost and worldwide coverage: space-borne platforms can provide imagery of remote places, which are inaccessible to other missions, at any time. Because the manual interpretation of high-resolution satellite images is tedious and time-consuming, their automatic analysis continues to be an intense field of research. At times, however, it is difficult to understand complex scenes with a dense placement of buildings, where parts of buildings may be occluded by vegetation or other surrounding constructions, making their extraction or reconstruction even more difficult. The incorporation of several data sources representing different modalities may facilitate the problem. The goal of this dissertation is to integrate multiple high-resolution remote sensing data sources for automatic satellite imagery interpretation, with emphasis on building information extraction and refinement; the associated challenges are addressed in the following.

Building footprint extraction from Very High-Resolution (VHR) satellite images is an important but highly challenging task, due to the large diversity of building appearances and the relatively low spatial resolution of satellite data compared to airborne data. Many algorithms are built on spectral-based or appearance-based criteria from single or fused data sources to perform the building footprint extraction. The input features for these algorithms are usually manually extracted, which limits their accuracy. Based on the advantages of recently developed Fully Convolutional Networks (FCNs), i.e., the automatic extraction of relevant features and dense classification of images, an end-to-end framework is proposed which effectively combines the spectral and height information from red, green, and blue (RGB), pan-chromatic (PAN), and normalized Digital Surface Model (nDSM) image data and automatically generates a full-resolution binary building mask. The proposed architecture consists of three parallel networks merged at a late stage, which helps in propagating fine detailed information from earlier layers to higher levels, in order to produce an output with high-quality building outlines. The performance of the model is examined on new, unseen data to demonstrate its generalization capacity.
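The three-branch, late-fusion design can be sketched in PyTorch-style Python as below, with separate encoders for the RGB, PAN and nDSM inputs merged shortly before the output mask; channel counts and depths are illustrative assumptions rather than the thesis's exact architecture.

```python
import torch
import torch.nn as nn

# Minimal sketch of a late-fusion building extraction network: three parallel encoders
# for RGB, PAN and nDSM inputs, merged near the output into a full-resolution mask.

def branch(in_ch):
    return nn.Sequential(
        nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(inplace=True),
    )

class LateFusionBuildingNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.rgb, self.pan, self.ndsm = branch(3), branch(1), branch(1)
        self.fuse = nn.Sequential(                       # merge the branches at a late stage
            nn.Conv2d(3 * 64, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 1, 1),                         # 1-channel building logits
        )

    def forward(self, rgb, pan, ndsm):
        feats = torch.cat([self.rgb(rgb), self.pan(pan), self.ndsm(ndsm)], dim=1)
        return torch.sigmoid(self.fuse(feats))           # full-resolution building mask

net = LateFusionBuildingNet()
rgb, pan, ndsm = torch.rand(1, 3, 128, 128), torch.rand(1, 1, 128, 128), torch.rand(1, 1, 128, 128)
print(net(rgb, pan, ndsm).shape)                          # torch.Size([1, 1, 128, 128])
```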
The availability of detailed Digital Surface Models (DSMs), generated by dense matching and representing the elevation surface of the Earth, can improve the analysis and interpretation of complex urban scenarios. The generation of DSMs from VHR optical stereo satellite imagery leads to high-resolution DSMs which often suffer from mismatches, missing values, or blunders, resulting in a coarse representation of building shapes. To overcome these problems, a methodology based on a conditional Generative Adversarial Network (cGAN) is developed for generating a good-quality, Level of Detail (LoD) 2-like DSM with enhanced 3D object shapes directly from the low-quality photogrammetric half-meter resolution satellite DSM input. Various deep learning applications benefit from multi-task learning with multiple regression and classification objectives, by taking advantage of the similarities between individual tasks. Therefore, such influences are examined in this work for important remote sensing applications, namely realistic elevation model generation and roof type classification from stereo half-meter resolution satellite DSMs. Recently published deep learning architectures for both tasks are investigated, and a new end-to-end cGAN-based network is developed which combines different models that provide the best results for their individual tasks.
To benefit from information provided by multiple data sources, a different cGAN-based workflow is proposed, in which the generative part consists of two encoders and a common decoder that blends the intensity and height information within one network for the DSM refinement task. The inputs to the introduced network are single-channel photogrammetric DSMs with continuous values and pan-chromatic half-meter resolution satellite images. Information fusion from different modalities helps in propagating fine details, completes inaccurate or missing 3D information about building forms, and improves the building boundaries, making them more rectilinear.
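A minimal sketch of such a two-encoder/one-decoder generator is given below, blending a photogrammetric DSM with panchromatic intensity to produce a refined DSM; the layer configuration is an illustrative assumption and the adversarial part of the cGAN is omitted.

```python
import torch
import torch.nn as nn

# Minimal sketch of a dual-encoder generator: photogrammetric DSM and panchromatic
# intensity are encoded separately and blended in one common decoder.

def encoder(in_ch):
    return nn.Sequential(
        nn.Conv2d(in_ch, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
        nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
    )

class DualEncoderDSMRefiner(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc_dsm, self.enc_pan = encoder(1), encoder(1)
        self.decoder = nn.Sequential(                     # common decoder blends both modalities
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 1, 3, padding=1),               # refined single-channel DSM
        )

    def forward(self, dsm, pan):
        z = torch.cat([self.enc_dsm(dsm), self.enc_pan(pan)], dim=1)
        return self.decoder(z)

refiner = DualEncoderDSMRefiner()
dsm, pan = torch.rand(1, 1, 128, 128), torch.rand(1, 1, 128, 128)
print(refiner(dsm, pan).shape)                             # torch.Size([1, 1, 128, 128])
```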
Lastly, an additional comparison between the proposed methodologies for DSM enhancement is made to discuss and verify the most beneficial workflow and the applicability of the resulting DSMs to different remote sensing approaches.
488 |
Modélisation et simulation Eulériennes des écoulements diphasiques à phases séparées et dispersées : développement d’une modélisation unifiée et de méthodes numériques adaptées au calcul massivement parallèle / Eulerian modeling and simulations of separated and disperse two-phase flows: development of a unified modeling approach and associated numerical methods for highly parallel computations. Drui, Florence. 07 July 2017 (links)
In an industrial context, reduced-order two-phase models are used in predictive simulations of liquid fuel injection in automotive and aeronautical combustion chambers and help design more efficient and less polluting devices. The combustion quality strongly depends on the atomization process, starting from the separated-phase flow at the exit of the nozzle down to the cloud of fuel droplets characterized by a disperse-phase flow. Today, simulating all the physical scales involved in this process requires a major breakthrough in terms of modeling, numerical methods, and high-performance computing (HPC). These three aspects are addressed in this thesis. First, we are interested in mixture models, derived through Hamilton's variational principle and the second principle of thermodynamics. We enrich these models so that they can describe sub-scale pulsation mechanisms. Comparisons with experimental data in the context of bubbly flows enable an assessment of the physical consistency of the models and a validation of the methodology. Based on a geometrical study of the interface evolution, new directions are then proposed for further enriching the mixture models using the same methodology.
Second, we propose a numerical strategy based on finite volume methods, composed of an operator splitting approach, approximate Riemann solvers for the resolution of the convective part, and dedicated ODE solvers for the source terms. These methods have been adapted to handle several difficulties related to two-phase flows, such as the large acoustic impedance ratio, the stiffness of the source terms, and low-Mach issues. Moreover, a cell-based Adaptive Mesh Refinement (AMR) strategy is considered. This involves developing refinement criteria, setting the solution values on the new grids, and adapting the standard methods for regular structured grids to non-conforming grids. Finally, the scalability of this AMR tool relies on the p4est AMR library, which shows excellent scalability on several thousand cores. A code named CanoP has been developed and enables solving fluid dynamics equations on AMR grids. We show that CanoP can be used for future simulations of liquid atomization.
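The structure of such a splitting strategy, a finite-volume convective update with an approximate Riemann flux followed by a separate integration of a stiff source term, can be sketched on a scalar toy model as below; the model equation, the Rusanov flux choice, and all parameters are illustrative and unrelated to the thesis's two-phase systems.

```python
import numpy as np

# Minimal sketch of operator splitting on a scalar toy model
#   du/dt + a du/dx = (u_eq - u)/eps
# a finite-volume convective step with a Rusanov-type approximate Riemann flux,
# followed by an exact (unconditionally stable) step for the stiff relaxation source.

a, eps, u_eq = 1.0, 1e-3, 0.0            # advection speed, relaxation time, equilibrium value
nx, L, cfl, t_end = 200, 1.0, 0.45, 0.25
dx = L / nx
x = (np.arange(nx) + 0.5) * dx
u = np.exp(-200.0 * (x - 0.3) ** 2)      # initial condition, periodic domain

def convective_step(u, dt):
    """Finite-volume update with the Rusanov (local Lax-Friedrichs) numerical flux."""
    ul = np.roll(u, 1)                   # left neighbours (periodic)
    # flux at the left face of each cell, for F(u) = a*u:
    flux = 0.5 * (a * ul + a * u) - 0.5 * abs(a) * (u - ul)
    return u - dt / dx * (np.roll(flux, -1) - flux)

def source_step(u, dt):
    """Stiff relaxation towards u_eq, integrated exactly."""
    return u_eq + (u - u_eq) * np.exp(-dt / eps)

t = 0.0
while t < t_end - 1e-12:                 # Lie (first-order) splitting: transport, then source
    dt = min(cfl * dx / abs(a), t_end - t)
    u = source_step(convective_step(u, dt), dt)
    t += dt

print(f"max(u) = {u.max():.3e} after t = {t_end}")   # strongly damped by the relaxation term
```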
489 |
With a new refinement paradigm towards anisotropic adaptive FEM on triangular meshes. Schneider, Rene. 15 October 2013 (links)
Adaptive anisotropic refinement of finite element meshes makes it possible to reduce the computational effort required to achieve a specified accuracy of the solution of a PDE problem.
We present a new approach to adaptive refinement and demonstrate that it allows the construction of algorithms which generate very flexible and efficient anisotropically refined meshes, even improving the convergence order compared to adaptive isotropic refinement if the problem permits.
1 Introduction
2 Extension of FEM ansatz spaces
3 Optimality of the extension
4 Application 1: graded refinement
5 Application 2: anisotropic refinement in 2D
6 Numerical experiments
7 Conclusions and outlook
490 |
Learning OWL Class Expressions. Lehmann, Jens. 09 June 2010 (links)
With the advent of the Semantic Web and Semantic Technologies, ontologies have become one of the most prominent paradigms for knowledge representation and reasoning. The popular ontology language OWL, based on description logics, became a W3C recommendation in 2004 and a standard for modelling ontologies on the Web. In the meantime, many studies and applications using OWL have been reported in research and industrial environments, many of which go beyond Internet usage and employ the power of ontological modelling in other fields such as biology, medicine, software engineering, knowledge management, and cognitive systems.
However, recent progress in the field faces a lack of well-structured ontologies with large amounts of instance data, due to the fact that engineering such ontologies requires a considerable investment of resources. Nowadays, knowledge bases often provide large volumes of data without sophisticated schemata. Hence, methods for automated schema acquisition and maintenance are sought. Schema acquisition is closely related to solving typical classification problems in machine learning, e.g. the detection of chemical compounds causing cancer. In this work, we investigate both the underlying machine learning techniques and their application to knowledge acquisition in the Semantic Web.
In order to leverage machine-learning approaches for solving these tasks, it is required to develop methods and tools for learning concepts in description logics or, equivalently, class expressions in OWL. In this thesis, it is shown that methods from Inductive Logic Programming (ILP) are applicable to learning in description logic knowledge bases. The results provide foundations for the semi-automatic creation and maintenance of OWL ontologies, in particular in cases when extensional information (i.e. facts, instance data) is abundantly available, while corresponding intensional information (schema) is missing or not expressive enough to allow powerful reasoning over the ontology in a useful way. Such situations often occur when extracting knowledge from different sources, e.g. databases, or in collaborative knowledge engineering scenarios, e.g. using semantic wikis. It can be argued that being able to learn OWL class expressions is a step towards enriching OWL knowledge bases in order to enable powerful reasoning, consistency checking, and improved querying possibilities. In particular, plugins for OWL ontology editors based on learning methods are developed and evaluated in this work.
The developed algorithms are not restricted to ontology engineering and can handle other learning problems. Indeed, they lend themselves to generic use in machine learning in the same way as ILP systems do. The main difference, however, is the employed knowledge representation paradigm: ILP traditionally uses logic programs for knowledge representation, whereas this work rests on description logics and OWL. This difference is crucial when considering Semantic Web applications as target use cases, as such applications hinge centrally on the chosen knowledge representation format for knowledge interchange and integration. The work in this thesis can be understood as a broadening of the scope of research and applications of ILP methods. This goal is particularly important since the number of OWL-based systems is already increasing rapidly and can be expected to grow further in the future.
The thesis starts by establishing the necessary theoretical basis and continues with the specification of algorithms. It also contains their evaluation and, finally, presents a number of application scenarios. The research contributions of this work are threefold:
The first contribution is a complete analysis of desirable properties of refinement operators in description logics. Refinement operators are used to traverse the target search space and are, therefore, a crucial element in many learning algorithms. Their properties (completeness, weak completeness, properness, redundancy, infinity, minimality) indicate whether a refinement operator is suitable for being employed in a learning algorithm. The key research question is which of those properties can be combined. It is shown that there is no ideal, i.e. complete, proper, and finite, refinement operator for expressive description logics, which indicates that learning in description logics is a challenging machine learning task. A number of other new results for different property combinations are also proven. The need for these investigations has already been expressed in several articles prior to this PhD work. The theoretical limitations, which were shown as a result of these investigations, provide clear criteria for the design of refinement operators. In the analysis, as few assumptions as possible were made regarding the used description language.
The second contribution is the development of two refinement operators. The first operator supports a wide range of concept constructors and it is shown that it is complete and can be extended to a proper operator. It is the most expressive operator designed for a description language so far. The second operator uses the light-weight language EL and is weakly complete, proper, and finite. It is straightforward to extend it to an ideal operator, if required. It is the first published ideal refinement operator in description logics. While the two operators differ a lot in their technical details, they both use background knowledge efficiently.
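To make the notion of a refinement operator concrete, the sketch below implements a toy downward operator over an EL-like language of atomic classes, existential restrictions, and conjunctions; the class hierarchy, role list, and traversal are deliberately simplified illustrations, not the operators developed in the thesis.

```python
# Minimal sketch of a downward refinement operator on a toy EL-like concept language:
# concepts are ("atom", name), ("some", role, filler), or ("and", [operands]).
# The class hierarchy and role list are illustrative; real operators work against an
# OWL knowledge base and handle many more constructors.

subclasses = {"Thing": ["Person", "Place"], "Person": ["Researcher", "Student"]}
roles = ["worksAt", "knows"]

def refine(concept):
    """Yield concepts that are (syntactically) more specific than the input concept."""
    kind = concept[0]
    if kind == "atom":
        name = concept[1]
        for sub in subclasses.get(name, []):              # move down the class hierarchy
            yield ("atom", sub)
        for role in roles:                                # add an existential restriction
            yield ("and", [concept, ("some", role, ("atom", "Thing"))])
    elif kind == "some":
        _, role, filler = concept
        for f in refine(filler):                          # refine the filler concept
            yield ("some", role, f)
    elif kind == "and":
        operands = concept[1]
        for i, op in enumerate(operands):                 # refine one conjunct at a time
            for r in refine(op):
                yield ("and", operands[:i] + [r] + operands[i + 1:])

def show(c):
    if c[0] == "atom":
        return c[1]
    if c[0] == "some":
        return f"({c[1]} some {show(c[2])})"
    return "(" + " and ".join(show(o) for o in c[1]) + ")"

for refined in refine(("atom", "Thing")):
    print(show(refined))
# e.g. Person, Place, (Thing and (worksAt some Thing)), (Thing and (knows some Thing))
```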
The third contribution is the actual learning algorithms using the introduced operators. New redundancy elimination and infinity-handling techniques are introduced in these algorithms. According to the evaluation, the algorithms produce very readable solutions, while their accuracy is competitive with the state-of-the-art in machine learning. Several optimisations for achieving scalability of the introduced algorithms are described, including a knowledge base fragment selection approach, a dedicated reasoning procedure, and a stochastic coverage computation approach.
The research contributions are evaluated on benchmark problems and in use cases. Standard statistical measurements such as cross validation and significance tests show that the approaches are very competitive. Furthermore, the ontology engineering case study provides evidence that the described algorithms can solve the target problems in practice. A major outcome of the doctoral work is the DL-Learner framework. It provides the source code for all algorithms and examples as open-source and has been incorporated in other projects.