  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Numerical Study of Coherent Structures within a Legacy LES Code and Development of a New Parallel Framework for Their Computation

Giammanco, Raimondo R 22 December 2005 (has links)
The understanding of the physics of Coherent Structures and their interaction with the remaining fluid motions is of paramount interest in turbulence research. Indeed, it has recently been suggested that separating and understanding the different physical behaviour of Coherent Structures and the "incoherent" background may well be the key to understanding and predicting turbulence. Available understanding of Coherent Structures shows that their size is considerably larger than the turbulent macro-scale, which makes it permissible to apply Large Eddy Simulation (LES) to their simulation and study, with the advantage of being able to study their behaviour at higher Reynolds numbers and in more complex geometries than a Direct Numerical Simulation would normally allow. The original purpose of the present work was therefore to validate the use of LES for the study of Coherent Structures in shear layers, and to apply it to different flow cases in order to study the effect of the flow topology on the nature of the Coherent Structures. However, during the investigation of the presence of Coherent Structures in numerically generated LES flow fields, the aging in-house LES code of the Environmental & Applied Fluid Dynamics Department showed a series of limitations and shortcomings that led to the decision to relegate it to the status of legacy code (from now on indicated as the VKI LES legacy code) and to discontinue its development. A new, natively parallel LES solver was then developed in the VKI Environmental & Applied Fluid Dynamics Department, in which all the shortcomings of the legacy code have been addressed and modern software technologies have been adopted both for the solver and for the surrounding infrastructure, delivering a complete framework based exclusively on Free and Open Source Software (FOSS) to maximize portability and avoid any dependency on commercial products.
The new parallel LES solver retains some basic characteristics of the old legacy code to provide continuity with the past (finite differences, staggered grid arrangement, multi-domain technique, grid conformity across domains), but improves on almost all remaining aspects: the flow can now be inhomogeneous in all three directions, against only two in the past; the pressure equation can be solved using a three-point stencil for improved accuracy; and the viscous and convective terms can be computed using the Computer Algebra System Maxima, which derives the discretized formulas automatically. For the convective terms, High Resolution Central Schemes have been adapted to the three-dimensional staggered grid arrangement from a collocated two-dimensional one, and a system of Master-Slave simulations has been developed in which a Slave simulation (on 1 Processing Element) runs in parallel to generate the inlet data for the Master simulation (n - 1 Processing Elements). The code can perform automatic run-time load balancing and domain auto-partitioning, and has embedded documentation (Doxygen) and a CVS repository (version management) for the convenience of new and old developers alike. As part of the new framework, a set of visual programs has been provided for IBM Open Data eXplorer (OpenDX), a powerful FOSS flow visualization and analysis tool, intended as a replacement for the commercial Tecplot™, together with a bug-tracking mechanism via Bugzilla and cooperative forum resources (phpBB) for developers and users alike. The new M.i.O.m.a. (MiOma) solver is ready to be used for Coherent Structures analysis in the near future.
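The staggered-grid, finite-difference arrangement described above can be illustrated with a minimal one-dimensional sketch. All names here are hypothetical and not taken from the MiOma solver; the point is only to show why a staggered layout yields compact two-point divergence and gradient stencils and the three-point pressure Laplacian mentioned in the abstract.

```python
import numpy as np

# Staggered 1D layout: velocities u[i] live on cell faces x = i*h,
# pressures p[i] live at cell centres x = (i + 0.5)*h.

def divergence(u, h):
    """div(u) at cell centres, from face velocities (two-point stencil)."""
    return (u[1:] - u[:-1]) / h

def gradient(p, h):
    """grad(p) at interior faces, from cell-centre pressures."""
    return (p[1:] - p[:-1]) / h

def pressure_laplacian(p, h):
    """Three-point Laplacian of p at interior cell centres."""
    return (p[2:] - 2.0 * p[1:-1] + p[:-2]) / h**2

h = 0.1
x_faces = np.arange(6) * h            # 6 faces -> 5 cells
u = x_faces**2                        # u = x^2, so du/dx = 2x
div_u = divergence(u, h)
# a second-order central difference is exact for a quadratic:
# div_u[i] equals 2*(i + 0.5)*h, i.e. 2x evaluated at the cell centre
```

Each operator couples only nearest neighbours, which is what keeps the per-domain communication in a multi-domain decomposition limited to one layer of halo cells.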
2

Substrate Resistance Extraction Using a Multi-Domain Surface Integral Formulation

Vithayathil, Anne, Hu, Xin, White, Jacob K. 01 1900 (has links)
In order to assess and optimize layout strategies for minimizing substrate noise, it is necessary to have fast and accurate techniques for computing contact coupling resistances associated with the substrate. In this talk, we describe an extraction method capable of full-chip analysis which combines modest geometric approximations, a novel integral formulation, and an FFT-accelerated preconditioned iterative method. / Singapore-MIT Alliance (SMA)
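The FFT acceleration mentioned in this abstract exploits the translation invariance of the substrate kernel: the dense matrix-vector products inside the preconditioned iterative solver reduce to convolutions. As a hedged one-dimensional analogue (the actual method operates on a surface integral formulation in higher dimensions), a symmetric Toeplitz matvec via circulant embedding looks like this; the function name is illustrative only.

```python
import numpy as np

def toeplitz_matvec_fft(kernel, x):
    """y = T @ x for the symmetric Toeplitz matrix T[i, j] = kernel[|i - j|],
    computed in O(n log n) by embedding T in a 2n x 2n circulant whose
    matvec is a pair of FFTs. This is the core trick behind FFT-accelerated
    iterative methods: the dense matrix is never formed."""
    n = len(x)
    c = np.concatenate([kernel, [0.0], kernel[:0:-1]])  # circulant first column
    xp = np.concatenate([x, np.zeros(n)])               # zero-padded operand
    y = np.fft.ifft(np.fft.fft(c) * np.fft.fft(xp)).real
    return y[:n]
```

Inside a conjugate-gradient or GMRES loop, this matvec replaces the O(n^2) dense product with the translation-invariant kernel matrix, which is what makes full-chip-scale extraction tractable.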
3

Numerical modelling based on the multiscale homogenization theory. Application to composite materials and structures

Badillo Almaraz, Hiram 16 April 2012 (has links)
A multi-domain homogenization method based on a two-scale technique is proposed and developed in this thesis. The method is capable of analyzing composite structures with several periodic distributions by partitioning the entire domain of the composite into substructures, making use of the classical homogenization theory following a first-order standard continuum mechanics formulation. The need to develop the multi-domain homogenization method arose because current homogenization methods are based on the assumption that the entire domain of the composite is represented by one periodic or quasi-periodic distribution. However, in some cases the structure or composite may be formed by more than one type of periodic domain distribution, which makes the existing homogenization techniques unsuitable for cases in which more than one recurrent configuration appears. The theoretical principles used in the multi-domain homogenization method were applied to assemble a computational tool based on two nested boundary value problems represented by a finite element code in two scales: a) a global scale, which treats the composite as a homogeneous material and deals with the boundary conditions, the applied loads and the different periodic (or quasi-periodic) subdomains that may exist in the composite; and b) a local scale, which obtains the homogenized response of the representative volume element or unit cell, and deals with the geometric distribution and the material properties of the constituents. The method is based on the local periodicity hypothesis arising from the periodicity of the internal structure of the composite. The numerical implementation of the restrictions on the displacements and forces corresponding to the degrees of freedom of the domain's boundary derived from the periodicity was performed by means of the Lagrange multipliers method.
The formulation includes a method to compute the homogenized non-linear tangent constitutive tensor once the nonlinearity threshold of any of the unit cells has been surpassed. The procedure is based on performing a numerical derivation using a perturbation technique. The tangent constitutive tensor is computed for each load increment and for each iteration of the analysis once the structure has entered the non-linear range. The perturbation method was applied at both the global and local scales in order to analyze its performance at each scale. A simple average of the constitutive tensors of the elements of the cell was also explored for comparison purposes. A parallelization scheme was implemented in the multi-domain homogenization method in order to speed up the computation, given the huge computational cost that the nested incremental-iterative solution entails. The effect of softening in two-scale homogenization was investigated following a smeared crack approach. Mesh objectivity was discussed first within the classical one-scale FE formulation, and the concepts exposed were then extrapolated to the two-scale homogenization framework. The importance of the element characteristic length in a multi-scale analysis was highlighted in the computation of the specific dissipated energy when strain softening occurs. Various examples are presented to evaluate and explore the capabilities of the computational approach developed in this research. Several aspects were studied, such as different composite arrangements that include different types of materials, composites that present softening after the yield point is reached (e.g. damage and plasticity), and composites with zones that present high strain gradients. The examples were carried out on composites with one and with several periodic domains, using different unit cell configurations.
The examples are compared to benchmark solutions obtained with the classical one-scale FE method.
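The perturbation technique for the homogenized tangent described in this abstract amounts to numerically differentiating the stress response with respect to strain, one component at a time. A minimal forward-difference sketch, with hypothetical names and a linear-elastic toy material (in Voigt notation) used only to check that the scheme recovers the known stiffness:

```python
import numpy as np

def numerical_tangent(stress, strain, eps=1e-7):
    """Approximate the tangent constitutive tensor d(sigma)/d(epsilon) by
    forward-difference perturbation of each strain component, the same idea
    as the perturbation technique used for the homogenized tangent."""
    strain = np.asarray(strain, dtype=float)
    sigma0 = np.asarray(stress(strain), dtype=float)
    C = np.zeros((sigma0.size, strain.size))
    for j in range(strain.size):
        pert = strain.copy()
        pert[j] += eps                       # perturb one component
        C[:, j] = (np.asarray(stress(pert)) - sigma0) / eps
    return C

# check against a linear-elastic "material": the numerical tangent of
# sigma = D @ epsilon must recover D itself (hypothetical constants)
E, nu = 200e9, 0.3
lam = E * nu / ((1 + nu) * (1 - 2 * nu))
mu = E / (2 * (1 + nu))
D = np.array([[lam + 2 * mu, lam,          0.0],
              [lam,          lam + 2 * mu, 0.0],
              [0.0,          0.0,          mu ]])   # 3x3 plane-strain-like example
C = numerical_tangent(lambda e: D @ e, np.array([1e-3, 0.0, 2e-4]))
```

In the nonlinear range, `stress` would instead be the homogenized response of the unit cell, and the same perturbation loop is repeated at every load increment and iteration, which is what makes the nested solution so costly.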
4

The study and fabrication of liquid crystal alignment using polydimethylsiloxane

Yang, Lu-hsiang 23 July 2007 (has links)
Vertical-alignment liquid crystal displays have the advantages of wide viewing angle, high contrast and good response time. Vertical-alignment liquid crystal displays are now developing rapidly, and Multi-Domain Vertical Alignment (MVA) technology is progressing. In this work we sought an alignment material that could be fabricated easily, at low cost and with good electro-optical characteristics. Polydimethylsiloxane (PDMS), used in the fabrication of microfluidic channels in biotechnology, offers these characteristics. In this study PDMS was used as the vertical alignment material for the liquid crystal (LC). We used the MVA structure and the ASV structure to set the LC direction, because PDMS cannot be rubbed. In the experiments we found that PDMS exhibited different surface energies when baked at different temperatures. The pre-tilt angles measured under the different surface-energy conditions were similar. ASV LC samples were fabricated using a PDMS alignment layer, and MVA LC cells were made using high-viscosity PDMS. We found paper-white characteristics on the CIE chromaticity diagram in the ASV LC sample without any color filter or color-correction technology.
5

Study of Multi-domain Vertical Alignment Flexible Liquid Crystal Display

Kuo, Chien-Ting 15 July 2009 (has links)
A multi-domain vertical alignment flexible liquid crystal display based on photolithography and a replica-molding method has been demonstrated. In order to maintain a uniform cell gap between the flexible substrates, the microstructures were fabricated from polydimethylsiloxane (PDMS) by replica molding. The microstructure master was designed and fabricated in a photosensitive resin (SU-8) by photolithography. The microstructured pixel-encapsulating walls enhance the mechanical strength and prevent the liquid crystal molecules from flowing during bend-state deformations. Besides, the elastomeric material PDMS provides a low surface energy and induces vertical alignment of the liquid crystal spontaneously, without any surface treatment. The microstructure protrusions made of PDMS provide a multi-domain vertical alignment (MVA) effect with wide viewing angle and high contrast ratio. Therefore, this method could be used to achieve multi-domain vertical alignment in flexible LCD applications. The flexible LCD shows good stability, reproducibility, durability and good electro-optical performance.
6

MULTI-DOMAIN, MULTI-OBJECTIVE-OPTIMIZATION-BASED APPROACH TO THE DESIGN OF CONTROLLERS FOR POWER ELECTRONICS

Shang, Jing 01 January 2014 (has links)
Power converters play a very important role in modern electric power systems, and their control is necessary to achieve high performance. In this study, a dc-dc buck converter is considered, and the parameters of a notional proportional-integral controller are to be selected. Genetic algorithms (GAs), which have been widely used to solve multi-objective optimization problems, are used to locate appropriate controller designs. The control metrics are specified as the phase margin in the frequency domain and the voltage error in the time domain, and the GA produces the optimal trade-offs between these two objectives. Three candidate control designs are studied in simulation and experimentally. There is broad agreement between the experimental results and the simulation results, with some discrepancies due to model error. Overall, the multi-domain, multi-objective-optimization-based approach has proven feasible.
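The trade-off selection this abstract relies on can be illustrated by the dominance test at the heart of any Pareto-based multi-objective GA. This is a sketch of non-dominated filtering only, not the thesis' algorithm; both objectives are taken as minimized (e.g. negative phase margin and time-domain voltage error), and the numeric candidates are invented for illustration.

```python
def pareto_front(designs):
    """Return the non-dominated subset of (f1, f2) pairs, both minimized.
    A design is dominated if some other design is no worse in both
    objectives (and differs from it)."""
    front = []
    for a in designs:
        dominated = any(
            b[0] <= a[0] and b[1] <= a[1] and b != a
            for b in designs
        )
        if not dominated:
            front.append(a)
    return front

# hypothetical (objective 1, objective 2) values for four controller designs
candidates = [(2.0, 5.0), (3.0, 3.0), (5.0, 1.0), (4.0, 4.0)]
# (4.0, 4.0) is dominated by (3.0, 3.0); the other three trade off the
# two objectives and survive the filter
```

A GA such as NSGA-II repeats this ranking generation after generation while recombining the surviving parameter sets, which is how the optimal trade-off curve between the two control metrics emerges.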
7

Stability of magnetic remanence in multidomain magnetite

Muxworthy, Adrian R. January 1998 (has links)
If a rock is to retain a geologically meaningful magnetic record of its history, it is essential that it contains magnetic minerals capable of carrying a stable magnetic remanence. Of the naturally occurring magnetic minerals, magnetite is the most important because of its abundance and strong magnetic signature. The stability, i.e., the resistance to demagnetisation or reorientation, of magnetic remanence is related to grain size; in smaller grains the magnetic moments align into single domain (SD) structures, while in larger grains complex magnetic patterns are formed (multidomain, MD). "Classical" domain theory predicts that SD remanence is stable, whilst MD remanence is not. However, experimental evidence has shown that both SD and MD grains can carry stable remanences. In this thesis the origin of stable MD remanence is examined. There are two opposing theories: one suggests that the stability is due to independent SD-like structures, the other postulates that it is due to metastable MD structure. A series of experiments was designed to examine the stability using a selection of characterised synthetic and natural samples. Low-stress hydrothermally recrystallised samples were grown for this study. For the first time, the stability to cooling of thermoremanence induced in hydrothermal crystals was examined. The results agree with previous observations for crushed and natural magnetites, and support kinematic models. The behaviour of SIRM and thermoremanence in MD magnetite during low-temperature cooling to below the crystallographic Verwey transition at 120-124 K (T<sub>v</sub>) and the cubic magnetocrystalline anisotropy isotropic point (T<sub>k</sub>) at 130 K was investigated. On cooling through T<sub>v</sub>, SIRM was observed to decrease and demagnetise; thermoremanence, however, was found to display a large increase in magnetisation at T<sub>v</sub>, which was partially reversible on warming.
The size of the anomaly is shown to depend on the temperature at which the thermoremanence is acquired, the internal stress and the grain size. The anomaly is attributed to the large increase in the magnetocrystalline anisotropy which occurs on cooling through T<sub>v</sub>. It is postulated that low-temperature cycling demagnetisation is due to kinematic processes which occur on cooling between room temperature and T<sub>k</sub>. Characterisation of low-temperature treated remanence and of partially alternating-field demagnetised remanence suggests that the stable remanence is multidomain. Low-temperature cooling of remanence in single sub-micron crystals was simulated using micromagnetic models. The models predict the observed anomaly for thermoremanence on cooling through T<sub>v</sub>, and also the relative behaviour of SIRM and thermoremanence. The single domain threshold was calculated for the low-temperature phase of magnetite and found to be 0.14 microns, compared to 0.07 microns at room temperature.
8

Structure, Stability and Evolution of Multi-Domain Proteins

Bhaskara, Ramachandra M January 2013 (has links) (PDF)
Analyses of protein sequences from diverse genomes have revealed the ubiquitous nature of multi-domain proteins. They form up to 70% of the proteomes of most eukaryotic organisms. Yet our understanding of protein structure, folding and evolution has been dominated by extensive studies on single-domain proteins. We provide quantitative treatment and proof for prevailing intuitive ideas on the strategies employed by nature to stabilize otherwise unstable domains. We find that domains incapable of independent stability are stabilized by favourable interactions with tethered domains in the multi-domain context. Natural variations (nsSNPs) at these sites alter communication between domains and affect stability, leading to disease manifestation. We emphasize this by using explicit all-atom molecular dynamics simulations to study the interface nsSNPs of human Glutathione S-transferase omega 1. We show that domain-domain interface interactions constrain the inter-domain geometry (IDG), which is evolutionarily well conserved. The inter-domain linkers modulate the interactions by varying their lengths, conformations and local structure, thereby affecting the overall IDG. These findings led to the development of a method to predict interfacial residues in multi-domain proteins based on the difference in evolutionary information extracted from at least two diverse domain architectures (single- and multi-domain). Our predictions are highly accurate (∼85%) and specific (∼95%). Using the predicted residues to constrain the domain-domain interaction, rigid-body docking was able to provide accurate full-length protein structures with the correct orientation of the domains. Further, we developed and employed an alignment-free approach based on local amino-acid fragment matching to compare sequences of multi-domain proteins. This is especially effective in the absence of proper alignments, which is usually the case for multi-domain proteins.
Using this approach, we were able to recreate the existing Hanks and Hunter classification scheme for protein kinases. We also showed functional relationships among immunoglobulin sequences: the clusters obtained were functionally distinct and showed unique domain architectures. Our analysis provides guidelines toward rational protein and interaction design, with attractive applications in obtaining the stable fragments and domain constructs essential for structural studies by crystallography and NMR. These studies enable a deeper understanding of the interplay of protein domains in the multi-domain context.
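The alignment-free, fragment-matching comparison described above can be approximated with a simple k-mer (length-k fragment) set similarity. This Jaccard sketch illustrates the general idea only, not the thesis' actual scoring scheme, and the toy sequences are invented.

```python
def kmer_set(seq, k=3):
    """All length-k amino-acid fragments of a sequence."""
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def fragment_similarity(a, b, k=3):
    """Jaccard similarity of k-mer sets: an alignment-free score that
    needs no multiple sequence alignment, so it copes with the domain
    rearrangements that break alignments of multi-domain proteins."""
    sa, sb = kmer_set(a, k), kmer_set(b, k)
    if not (sa or sb):
        return 0.0
    return len(sa & sb) / len(sa | sb)

# two toy "sequences" sharing conserved fragments score higher with each
# other than either does with an unrelated toy sequence
s1 = "MKTAYIAKQR"
s2 = "MKTAYLDKQR"
s3 = "GGGPPPWWWC"
```

Clustering pairwise scores of this kind (e.g. with hierarchical or graph-based clustering) is one way such an approach can recover functionally coherent groups without ever building an alignment.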
9

Kinematic Analysis, Numerical Modeling, and Design Optimization of Helical External Gear Pumps

Xinran Zhao (5930489) 16 January 2020 (has links)
With their advantages of low cost, high reliability and simplicity, external gear pumps (EGPs) are popular choices in many applications, such as mobile hydraulic control systems, fuel injection, and liquid transportation systems, to name a few. Like other positive-displacement machines, EGPs are characterized by a flow non-uniformity, which is caused by the gear meshing and results in vibration and noise. With the increasing demand for low-noise components in modern fluid-power systems, new designs of external gear machines with lower noise emission and lower pulsation are highly desired by industry.

To satisfy these demands, several new-generation gear pump designs have been realized by industry and already commercialized. However, academic research on external gear pumps is still primarily focused on traditional involute gear pumps, while state-of-the-art research on these new-generation external gear pumps is largely lacking. Even for the most novel designs recently released to the market, there is still a large margin for improvement, as some of the physics inside these gear machines is not well understood or formulated. The goal of this research is to fill this gap by gaining an understanding of the relations between design features and the actual flow generated by such novel designs, and by providing general methods of analysis and design for efficient and silent units.

To achieve this goal, this PhD dissertation presents a comprehensive approach to the analysis of external gear pumps, with an emphasis on the new-generation helical gear pumps. The discussion covers a large variety of aspects of gear pump design and analysis, including the analysis of gear profile design and meshing, the geometric modeling of the displacement chambers, and the kinematic flow analysis. These are followed by a dynamic simulation model covering the dynamics of fluids, forces, and micro-motions, together with simulation results that provide insight into the physics of new-generation gear machines. Multiple experimental results are provided, which show the validity of the simulation models by matching the pressure-ripple measurements and the volumetric efficiencies. Furthermore, a linearized analysis of the ripple source of gear pumps is described, in order to connect the pump-generated ripple to higher-level system analysis, a link that is also missing from past academic research. In addition, some of the models are used in optimization studies. The optimization results show the potential of the proposed approach both to improve existing designs and to develop more efficient and silent units.
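The flow non-uniformity that motivates the helical designs is commonly summarized by a non-uniformity index: the peak-to-peak variation of the instantaneous delivery over its mean. A minimal sketch, with invented ripple amplitudes purely to illustrate why a helical unit scores lower than a spur-gear unit:

```python
import numpy as np

def flow_nonuniformity(q):
    """Flow non-uniformity index (Qmax - Qmin) / Qmean for one
    instantaneous flow-rate profile over a meshing period."""
    q = np.asarray(q, dtype=float)
    return (q.max() - q.min()) / q.mean()

# hypothetical kinematic flow over one tooth-meshing period: a mean
# delivery plus a meshing-frequency ripple (amplitudes are invented)
theta = np.linspace(0.0, 2.0 * np.pi, 200)
q_spur = 1.0 + 0.08 * np.cos(theta)      # spur-like: larger ripple
q_helical = 1.0 + 0.02 * np.cos(theta)   # helical: axially staggered meshing
                                         # smooths the delivery
```

The helix effectively staggers the meshing along the axial direction, so the tooth-by-tooth flow pulses partially cancel; the index above is one simple scalar an optimizer can minimize alongside efficiency.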
10

SECURE MIDDLEWARE FOR FEDERATED NETWORK PERFORMANCE MONITORING

Kulkarni, Shweta Samir 06 August 2013 (has links)
No description available.
