  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
151

An Improved Ghost-cell Immersed Boundary Method for Compressible Inviscid Flow Simulations

Chi, Cheng 05 1900 (has links)
This study presents an improved ghost-cell immersed boundary approach to represent a solid body in compressible flow simulations. In contrast to commonly used approaches, in the present work ghost cells are mirrored through the boundary, described using a level-set method, to farther image points, incorporating a higher-order extrapolation/interpolation scheme for the ghost-cell values. In addition, a shock sensor is introduced to deal with image points near discontinuities in the flow field. Adaptive mesh refinement (AMR) is used to represent the geometry efficiently. The improved ghost-cell method is validated against five test cases: (a) double Mach reflection on a ramp, (b) supersonic flow in a wind tunnel with a forward-facing step, (c) supersonic flow over a circular cylinder, (d) smooth Prandtl-Meyer expansion flow, and (e) steady shock-induced combustion over a wedge. It is demonstrated that the improved ghost-cell method achieves second-order accuracy in the L1 norm and better than first-order accuracy in the L∞ norm. Direct comparisons against the cut-cell method demonstrate that the improved ghost-cell method is almost equally accurate, with better efficiency, for boundary representation in high-fidelity compressible flow simulations. Application of the improved ghost-cell method to reacting Euler flows further validates its general applicability to compressible flow simulations.
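The core geometric step, mirroring a ghost point across the level-set boundary to an image point on the fluid side, can be sketched as follows. This is a minimal illustration with a hypothetical interface; the thesis' higher-order extrapolation/interpolation scheme and shock sensor are not reproduced here.

```python
import numpy as np

def mirror_ghost_point(x_ghost, phi, normal, factor=2.0):
    """Mirror a ghost point across the boundary (the phi = 0 level set).

    x_ghost : coordinates of the ghost point inside the solid
    phi     : signed distance to the boundary (negative inside the solid)
    normal  : unit normal at the ghost point (gradient of the level set)
    factor  : 2.0 is the classical mirroring; the thesis uses a *farther*
              image point, which corresponds to a larger factor.
    Returns the image point on the fluid side, where flow values are
    interpolated and then assigned (with appropriate sign) to the ghost cell.
    """
    return np.asarray(x_ghost) + factor * abs(phi) * np.asarray(normal)
```

For a flat wall at y = 0 with the solid below, a ghost point at (0, -0.5) maps to the image point (0, 0.5) with the classical factor of 2.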
152

Multigrid with Cache Optimizations on Adaptive Mesh Refinement Hierarchies

Thorne Jr., Daniel Thomas 01 January 2003 (has links)
This dissertation presents a multilevel algorithm to solve constant- and variable-coefficient elliptic boundary value problems on adaptively refined structured meshes in 2D and 3D. Cache-aware algorithms for optimizing the operations to exploit the cache memory subsystem are shown. Keywords: Multigrid, Cache Aware, Adaptive Mesh Refinement, Partial Differential Equations, Numerical Solution.
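The multilevel idea underlying the dissertation can be sketched for a 1D constant-coefficient Poisson problem. This is a minimal textbook V-cycle (weighted-Jacobi smoothing, full-weighting restriction, linear prolongation); the cache optimizations and AMR hierarchies of the dissertation are not reproduced here.

```python
import numpy as np

def vcycle(u, f, h, nu=2):
    """One V-cycle for -u'' = f with Dirichlet boundaries u[0] = u[-1] = 0.

    u, f : arrays of length n + 1 where n is a power of two.
    """
    n = len(u) - 1
    omega = 2.0 / 3.0
    for _ in range(nu):  # pre-smoothing: weighted Jacobi sweeps
        u[1:-1] += omega * (0.5 * (u[:-2] + u[2:] + h * h * f[1:-1]) - u[1:-1])
    if n <= 2:
        # coarsest grid: solve the single interior unknown exactly
        u[1] = 0.5 * (u[0] + u[2] + h * h * f[1])
        return u
    # residual of the 3-point discretization
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] + (u[:-2] - 2.0 * u[1:-1] + u[2:]) / (h * h)
    # restrict the residual to the coarse grid (full weighting)
    rc = np.zeros(n // 2 + 1)
    rc[1:-1] = 0.25 * (r[1:-2:2] + r[3::2]) + 0.5 * r[2:-1:2]
    # recurse for the coarse-grid error, then prolong (linear interpolation)
    ec = vcycle(np.zeros_like(rc), rc, 2.0 * h, nu)
    e = np.zeros_like(u)
    e[::2] = ec
    e[1::2] = 0.5 * (ec[:-1] + ec[1:])
    u += e
    for _ in range(nu):  # post-smoothing
        u[1:-1] += omega * (0.5 * (u[:-2] + u[2:] + h * h * f[1:-1]) - u[1:-1])
    return u
```

A few V-cycles reduce the algebraic error to well below the discretization error, independently of the grid size; this grid-independent convergence is what makes multigrid attractive on adaptively refined hierarchies.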
153

Real-Time Workload Models : Expressiveness vs. Analysis Efficiency

Stigge, Martin January 2014 (has links)
The requirements for real-time systems in safety-critical applications typically contain strict timing constraints. The design of such a system must be subject to extensive validation to guarantee that critical timing constraints will never be violated while the system operates. A mathematically rigorous technique to do so is to perform a schedulability analysis for formally verifying models of the computational workload. Different workload models make it possible to describe task activations at different levels of expressiveness, ranging from traditional periodic models to sophisticated graph-based ones. An inherent conflict arises between the expressiveness and analysis efficiency of task models. The more expressive a task model is, the more accurately it can describe a system design, reducing over-approximations and thus minimizing wasteful over-provisioning of system resources. However, more expressiveness implies higher computational complexity of the corresponding analysis methods. Consequently, an ideal model provides the highest possible expressiveness for which efficient exact analysis methods exist. This thesis investigates the trade-off between expressiveness and analysis efficiency. A new digraph-based task model is introduced, which generalizes all previously proposed models that can be analyzed in pseudo-polynomial time without using any analysis-specific over-approximations. We develop methods that efficiently analyze variants of the model despite their strictly increased expressiveness. A key contribution is the notion of path abstraction, which enables efficient graph traversal algorithms. We demonstrate tractability borderlines for different classes of schedulers, namely static-priority and earliest-deadline-first schedulers, by establishing hardness results. These hardness proofs provide insights into the inherent complexity of developing efficient analysis methods and indicate fundamental difficulties of the considered schedulability problems.
Finally, we develop a novel abstraction refinement scheme to cope with combinatorial explosion and apply it to schedulability and response-time analysis problems. All methods presented in this thesis are extensively evaluated, demonstrating practical applicability.
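As a concrete instance of the kind of pseudo-polynomial analysis that the digraph model generalizes, the classical processor-demand test for EDF on sporadic tasks can be sketched as follows. This is a standard textbook test, not the thesis' method; the digraph model requires the path-abstraction-based graph traversal described above, which is not shown here.

```python
from math import floor

def dbf(C, D, T, t):
    """Demand bound function of a sporadic task with WCET C,
    relative deadline D, and minimum inter-arrival time T."""
    if t < D:
        return 0
    return (floor((t - D) / T) + 1) * C

def edf_schedulable(tasks, horizon):
    """Processor-demand test for EDF on a single processor.

    tasks: list of (C, D, T) triples. Checks every absolute deadline up
    to `horizon`; a complete analysis would derive a safe bound for the
    horizon from the total utilization instead of taking it as an input.
    """
    checkpoints = sorted({D + k * T
                          for (C, D, T) in tasks
                          for k in range(int((horizon - D) // T) + 1)
                          if D + k * T <= horizon})
    return all(sum(dbf(C, D, T, t) for (C, D, T) in tasks) <= t
               for t in checkpoints)
```

The test runs in time proportional to the number of deadlines in the horizon, i.e., pseudo-polynomial in the task parameters, which is exactly the complexity class the thesis takes as its tractability borderline.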
154

Integrated adaptive numerical methods for transient two-phase flow in heterogeneous porous media

Chueh, Chih-Che 26 January 2011 (has links)
Transient multi-phase flow problems in porous media are ubiquitous in engineering and environmental systems and processes; examples include heat exchangers, reservoir simulation, environmental remediation, magma flow in the Earth's crust, and water management in porous electrodes of PEM fuel cells. This thesis focuses on the development of accurate and computationally efficient numerical models to simulate such flows. The research challenges addressed in this work fall into two areas. From a numerical standpoint, conventional numerical methods, including Newton-Raphson linearization and a simple upwind scheme, do not always provide the required computational efficiency or sufficiently accurate resolution of the flow field. From a modelling perspective, closure schemes required in volume-averaged formulations, such as the generalized Leverett J function for capillary pressure, are specific to certain media (e.g. lithologic media) and are not valid for fibrous porous media, which are of central interest in fuel cells. This thesis presents a set of algorithms that are integrated efficiently to achieve computations that are more than two orders of magnitude faster than traditional techniques. The method uses an adaptive operator splitting method based on an a posteriori criterion to separate the flow from the transport equations, which eliminates unnecessary and costly solution of the implicit pressure-velocity term at every time step; adaptive meshing to reduce the size of the discretized problem; efficient block-preconditioned solver techniques for fast solution of the discrete equations; and a recently developed artificial diffusion strategy to stabilize the numerical solution of the transport equation. The significant improvements in accuracy and efficiency of the approach are demonstrated using numerical experiments in 2D and 3D.
The method is also extended to advection-dominated problems to specifically investigate two-phase flow in heterogeneous porous media involving capillary transport. Both hydrophilic and hydrophobic media are considered, and insights relevant to fuel cell electrodes are discussed.
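The adaptive operator-splitting idea, reusing the costly implicit pressure-velocity solution across transport steps until an a posteriori indicator triggers a re-solve, can be sketched with a hypothetical driver interface. All function names and the indicator are assumptions for illustration; the actual criterion in the thesis is an a posteriori estimate derived from the discretization.

```python
def adaptive_splitting_drive(n_steps, update_pressure, advance_saturation,
                             criterion, theta):
    """Skeleton of an adaptively split two-phase flow time loop.

    update_pressure()       -- solve the implicit pressure-velocity system
                               (the expensive step) and return the velocity
    advance_saturation(v)   -- explicit transport (saturation) update
    criterion(step)         -- a posteriori indicator for the current step
    theta                   -- tolerance above which a re-solve is triggered
    Returns the number of pressure solves actually performed.
    """
    velocity = update_pressure()           # initial implicit solve
    solves = 1
    for step in range(n_steps):
        if criterion(step) > theta:        # re-solve only when needed
            velocity = update_pressure()
            solves += 1
        advance_saturation(velocity)       # transport advances every step
    return solves
```

With a slowly varying flow field the criterion rarely fires, so the number of implicit solves drops far below the number of transport steps, which is the source of the speed-up claimed above.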
155

Thermal Analysis Of Eutectic Modified And Grain Refined Aluminum-silicon Alloys

Islamoglu, Erol Hamza 01 September 2005 (has links) (PDF)
A series of AlSi9Mg alloys were prepared and tested to reveal the effect of the addition sequence and timing of grain refiner and eutectic modifier. AlSr10 master alloy was used as the modification reagent, and AlTi5B master alloy as the grain refiner. The depression of the eutectic temperature due to the addition of the modifier, and the decrease in the amount of undercooling at the liquidus due to the presence of the grain refiner, were examined using cooling curves obtained with the Alu-Therm instrument, the aluminum thermal analyzer from Heraeus Electro-Nite. The alloys that were both modified and grain refined were subsequently poured into a permanent die-casting mould as tensile test specimens four times, at 60-minute intervals, while thermal analysis of the alloys was also performed. In this work, the effect of the grain refinement and modification agents, as well as the determination of the optimum time to pour after adding these agents, were studied with the aluminum thermal analyzer. The parameters obtained from the analyzer were compared with the microstructures; to see the effect of these agents on mechanical properties, hardness, tensile strength, and percent elongation were investigated. The possibility of predicting the mechanical properties prior to casting by thermal analysis was also examined by regression analysis, establishing a relationship between the thermal analysis parameters and the mechanical properties.
156

Structure and thermoelectric transport properties of isoelectronically substituted (ZnO)5In2O3

Masuda, Yoshitake, Ohta, Mitsuru, Seo, Won-Seon, Pitschke, Wolfram, Koumoto, Kunihito, 増田, 佳丈, 河本, 邦仁 15 February 2000 (has links)
No description available.
157

Mechanical Properties of Bulk Nanocrystalline Austenitic Stainless Steels Produced by Equal Channel Angular Pressing

Gonzalez, Jeremy 2011 August 1900 (has links)
Bulk nanocrystalline 304L and 316L austenitic stainless steels (SS) were produced by equal channel angular pressing (ECAP) at elevated temperature. The average grain size achieved in 316L and 304L SS is ~100 nm, and grain refinement occurs more rapidly in 316L SS than in 304L. The structures are shown to retain a predominantly austenitic phase. Hardness increases by a factor of about 2.5 in both steels, due largely to grain refinement and the introduction of a high density of dislocations. The tensile strength of the nanocrystalline steels exceeds 1 GPa, with good ductility in both systems. The mechanical properties of ECAPed 316L are also shown to be less dependent on strain rate than those of ECAPed 304L. The ECAPed steels exhibit thermal stability up to 600 °C, as indicated by the retention of high hardness in annealed specimens. Furthermore, there is an increased tolerance to radiation-induced hardening in the nanocrystalline equiaxed materials subjected to 100 keV He ions at an average dose of 3-4 displacements per atom at room temperature. The large volume fraction of high-angle grain boundaries may be vital for the enhanced radiation tolerance. These nanocrystalline SSs show promise for further research into radiation-resistant structural materials for next-generation nuclear reactor systems.
158

Progress-based verification and derivation of concurrent programs

Brijesh Dongol Unknown Date (has links)
Concurrent programs are known to be complicated because synchronisation is required amongst the processes in order to ensure safety (nothing bad ever happens) and progress (something good eventually happens). Due to possible interference from other processes, a straightforward rearrangement of statements within a process can lead to dramatic changes in the behaviour of a program, even if the behaviour of the process executing in isolation is unaltered. Verifying concurrent programs using informal arguments is usually unconvincing, which makes formal methods a necessity. However, formal proofs can be challenging due to the complexity of concurrent programs. Furthermore, safety and progress properties are proved using fundamentally different techniques. Within the literature, safety has been given considerably more attention than progress. One method of formally verifying a concurrent program is to develop the program, then perform a post-hoc verification using one of the many available frameworks. However, this approach tends to be optimistic because the developed program seldom satisfies its requirements. When a proof becomes difficult, it can be unclear whether the proof technique or the program itself is at fault. Furthermore, following any modifications to program code, a verification may need to be repeated from the beginning. An alternative approach is to develop a program using a verify-while-develop paradigm. Here, one starts with a simple program together with the safety and progress requirements that need to be established. Each derivation step consists of a verification, followed by introduction of new program code motivated using the proofs themselves. Because a program is developed side-by-side with its proof, the completed program satisfies the original requirements. Our point of departure for this thesis is the Feijen and van Gasteren method for deriving concurrent programs, which uses the logic of Owicki and Gries.
Although Feijen and van Gasteren derive several concurrent programs, because the Owicki-Gries logic does not include a logic of progress, their derivations only consider safety properties formally. Progress is considered post-hoc to the derivation using informal arguments. Furthermore, rules on how programs may be modified have not been presented, i.e., a program may be arbitrarily modified and hence unspecified behaviours may be introduced. In this thesis, we develop a framework for developing concurrent programs in the verify-while-develop paradigm. Our framework incorporates linear temporal logic, LTL, and hence both safety and progress properties may be given full consideration. We examine foundational aspects of progress by formalising minimal progress, weak fairness and strong fairness, which allow scheduler assumptions to be described. We formally define progress terms such as individual progress, individual deadlock, and liveness (which are properties of blocking programs) and wait-, lock-, and obstruction-freedom (which are properties of non-blocking programs). Then, we explore the inter-relationships between the various terms under the different fairness assumptions. Because LTL is known to be difficult to work with directly, we incorporate the logic of Owicki-Gries (for proving safety) and the leads-to relation from UNITY (for proving progress) within our framework. Following the nomenclature of Feijen and van Gasteren, our techniques are kept calculational, which aids derivation. We prove soundness of our framework by proving theorems that relate our techniques to the LTL definitions. Furthermore, we introduce several methods for proving progress using a well-founded relation, which keeps proofs of progress scalable.
During program derivation, in order to ensure unspecified behaviour is not introduced, it is also important to verify a refinement, i.e., show that every behaviour of the final (more complex) program is a possible behaviour of the abstract representation. To facilitate this, we introduce the concept of an enforced property, which is a property that the program code does not satisfy, but is required of the final program. Enforced properties may be any LTL formula, and hence may represent both safety and progress requirements. We formalise stepwise refinement of programs with enforced properties, so that code is introduced in a manner that satisfies the enforced properties, yet refinement of the original program is guaranteed. We present derivations of several concurrent programs from the literature.
159

Robust, refined and selective matching for accurate camera pose estimation / Sélection et raffinement de mises en correspondance robustes pour l'estimation de pose précise de caméras

Liu, Zhe 13 April 2015 (has links)
With the recent progress in photogrammetry, it is now possible to automatically reconstruct a model of a 3D scene from pictures or videos. The model is reconstructed in several stages. First, salient features (often points, but more generally regions) are detected in each image. Second, features that are common in image pairs are matched. Third, matched features are used to estimate the relative pose (position and orientation) of images. The global poses are then computed as well as the 3D location of these features (structure from motion). Finally, a dense 3D model can be estimated. The detection of salient features, their matching, as well as the estimation of camera poses play a crucial role in the reconstruction process. Inaccuracies or errors in these stages have a major impact on the accuracy and robustness of reconstruction for the entire scene.
In this thesis, we propose better methods for feature matching and feature selection, which improve the robustness and accuracy of existing methods for camera position estimation. We first introduce a photometric pairwise constraint for feature matches (VLD), which is more reliable than geometric constraints. Then we propose a semi-local matching approach (K-VLD) using this photometric match constraint. We show that our method is very robust, not only for rigid scenes but also for non-rigid and repetitive scenes, and that it improves the robustness and accuracy of pose estimation methods, such as those based on RANSAC. To improve the accuracy of camera position estimation, we study the accuracy of reconstruction and pose estimation as a function of the number and quality of matches, and experimentally derive a "quantity vs. quality" relation. Using this relation, we propose a method to select a subset of good matches that produces highly accurate pose estimations. We also aim at refining match positions. For this, we propose an improvement of least-squares matching (LSM) using an irregular sampling grid and image-scale exploration. We show that match refinement and match selection independently improve the reconstruction results, and that combining the two methods improves them further.
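A standard way to score and filter correspondences before pose estimation is nearest-neighbour matching with Lowe's ratio test, sketched below as a stand-in for the quality measures discussed above. This is a conventional baseline, not the thesis' VLD/K-VLD photometric constraint, which is stronger and is not reproduced here.

```python
import numpy as np

def ratio_test_matches(desc_a, desc_b, ratio=0.8):
    """Match descriptors from image A to image B with Lowe's ratio test.

    desc_a, desc_b : (n, d) arrays of feature descriptors (desc_b needs
    at least two rows). Returns a list of (i, j, score) triples where
    score = 1 - d1/d2; a higher score means a more distinctive, and thus
    presumably more reliable, correspondence.
    """
    matches = []
    for i, d in enumerate(desc_a):
        dist = np.linalg.norm(desc_b - d, axis=1)   # distances to all of B
        order = np.argsort(dist)
        d1, d2 = dist[order[0]], dist[order[1]]     # best and second best
        if d2 > 0 and d1 / d2 < ratio:              # keep distinctive matches
            matches.append((i, int(order[0]), 1.0 - d1 / d2))
    return matches
```

Sorting the surviving matches by score and keeping only the top-quality subset is one simple way to trade match quantity for quality before running a RANSAC-based pose estimator.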
160

Estudo da influência do processo ECAP (Equal Channel Angular Pressing) nas propriedades mecânicas e características microestruturais do aço SAE 1020. / Study of influence of ECAP(Equal Channel Angular Pressing) process in mechanical properties and microstructures characteristics in Steel SAE 1020

Silva, Gilson Jr. 10 November 2017 (has links)
Obtaining ultrafine-grained microstructures in low-carbon steels can broaden their industrial applications, owing to the superior mechanical properties achievable through grain refinement, such as strength, hardness, and toughness. The process known as Equal Channel Angular Pressing (ECAP) induces severe plastic deformation sufficient to modify the microstructural characteristics of metals, reducing their grain size and consequently improving their mechanical properties without changing the chemical composition, since it operates at temperatures below the recrystallization point. In this work, the ECAP process was carried out on SAE 1020 steel specimens at 550 °C. The specimens were divided into three groups: in the first, no heat treatment was applied between or after the ECAP passes; in the second, a stress-relief heat treatment was applied after the passes; and in the third, an intercritical annealing was applied after the first pass. Tensile, hardness, and Charpy impact tests were performed to assess the influence of ECAP on the mechanical behaviour of the steel. Optical and scanning electron microscopy were used to examine the microstructural changes caused by ECAP. The main objective of this work is to induce grain refinement by means of the ECAP process, using a two-part die developed in this study. The microstructural analyses and mechanical test results showed that the heat treatments, combined with the ECAP process, directly influenced the behaviour of SAE 1020 steel. With an increasing number of passes through the ECAP die, the grain size decreased, while the ultimate tensile strength and hardness increased. With the stress-relief heat treatment, a better combination of mechanical strength and ductility was obtained. The intercritical annealing was sufficient to induce phase transformation in the SAE 1020 steel, yielding positive results with respect to ductility and mechanical strength. Finally, the consistency of the microstructural investigations made it possible to understand the effects of ECAP on SAE 1020 steel.
