  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
31

Dimensional analysis based CFD modelling for power transformers

Zhang, Xiang January 2017 (has links)
Reliable thermal modelling approaches are crucial to transformer thermal design and operation. The highest temperature in the winding, usually referred to as the hot-spot temperature, is of the greatest interest because the insulation paper at the hot-spot undergoes the severest thermal ageing and determines the life expectancy of the transformer insulation. Therefore, the primary objective of transformer thermal design is to keep the hot-spot temperature rise over the ambient temperature within a certain limit. For liquid-immersed power transformers, this temperature rise is controlled by the winding geometry, power loss distribution, liquid flow rate and liquid properties. In order to obtain universally applicable thermal modelling results, dimensional analysis is adopted in this PhD thesis to guide computational fluid dynamics (CFD) simulations of disc-type transformer windings in steady state and their experimental verification. The modelling work is split into two parts, covering oil forced and directed (OD) cooling modes and oil natural (ON) cooling modes. COMSOL software is used for the CFD simulation work.

For OD cooling modes, the volumetric oil flow proportion in each horizontal cooling duct (Pfi) and the pressure drop coefficient over the winding (Cpd) are found to be controlled mainly by the Reynolds number at the winding pass inlet (Re) and the ratio of horizontal duct height to vertical duct width. Correlations for Pfi and Cpd in terms of these dimensionless controlling parameters are derived from CFD parametric sweeps and verified by experimental tests. The effects of different liquid types on the flow distribution and pressure drop are investigated using the derived correlations. Reverse flows at the bottom part of winding passes are shown by both CFD simulations and experimental measurements. The hot-spot factor, H, is interpreted as a dimensionless temperature at the hot-spot, and the effects of operational conditions, e.g. ambient temperature and loading level, on H are analysed.

For ON cooling modes, the flow is driven by buoyancy forces, and hot-streak dynamics play a vital role in determining fluid flow and temperature distributions. The dimensionless liquid flow and temperature distributions and H are all found to be controlled by Re, Pr and Gr/Re². An optimal design and operational regime for obtaining the minimum H, in which the effects of buoyancy forces are balanced by the effects of inertial forces, is identified from CFD parametric sweeps. Reverse flows are found at the top part of winding passes, opposite to the OD results. The total liquid flow rates of different liquids for the same winding geometry and power loss distribution in an ON cooling mode are determined, and with these flow rates the effects of different liquids on fluid flow and temperature distributions are investigated by CFD simulations.

The CFD modelling work on disc-type transformer windings in steady state presented in this PhD thesis is based on dimensional analyses of the fluid flow and heat transfer in the windings. The results obtained are therefore universally applicable and take the simplest possible form. In addition, the dimensional analyses provide insight into how the flow and temperature distribution patterns are controlled by the dimensionless parameters, regardless of the transformer operational conditions and the coolant liquid types used.
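As a rough illustration of the controlling groups named in this abstract (Re, Pr and Gr/Re²), they can be evaluated directly from liquid properties and duct geometry. The property values below are illustrative assumptions for a mineral oil at moderate temperature, not figures taken from the thesis.

```python
# Illustrative calculation of the dimensionless groups controlling
# transformer winding cooling: Re, Pr and Gr/Re^2.
# All property values are assumed for illustration (mineral oil, ~60 C).

def dimensionless_groups(rho, mu, cp, k, beta, v, D, dT, g=9.81):
    """Return (Re, Pr, Gr/Re^2) for duct flow of a cooling liquid."""
    Re = rho * v * D / mu                        # inertia vs viscosity
    Pr = cp * mu / k                             # momentum vs thermal diffusion
    Gr = g * beta * dT * D**3 * rho**2 / mu**2   # buoyancy vs viscosity
    return Re, Pr, Gr / Re**2                    # Gr/Re^2: buoyancy vs inertia

if __name__ == "__main__":
    Re, Pr, Gr_Re2 = dimensionless_groups(
        rho=870.0,   # density, kg/m^3 (assumed)
        mu=9e-3,     # dynamic viscosity, Pa.s (assumed)
        cp=1900.0,   # specific heat, J/(kg.K) (assumed)
        k=0.11,      # thermal conductivity, W/(m.K) (assumed)
        beta=7e-4,   # thermal expansion coefficient, 1/K (assumed)
        v=0.1,       # winding pass inlet velocity, m/s (assumed)
        D=4e-3,      # horizontal duct height, m (assumed)
        dT=20.0,     # temperature rise driving buoyancy, K (assumed)
    )
    print(f"Re = {Re:.1f}, Pr = {Pr:.0f}, Gr/Re^2 = {Gr_Re2:.3f}")
```

With these assumed values Gr/Re² comes out well below 1, i.e. forced flow dominates, which is the OD-like regime; ON cooling corresponds to Gr/Re² of order one, where the buoyancy/inertia balance the thesis identifies becomes important.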
32

Study of an L valve through dimensionless numbers (Estudo de uma válvula L através de números adimensionais)

Kuhn, Gabriel Cristiano 27 April 2016 (has links)
FAPERGS - Fundação de Amparo à Pesquisa do Estado do Rio Grande do Sul / An L valve is a right-angled, L-shaped pipe used to transfer solid particles between two vessels. The device uses the injection of a fluid and the pipe geometry to control the flow of particulate solids. This kind of non-mechanical valve is used in processes involving particle transport, such as pneumatic conveying lines and circulating fluidized bed reactors. The objective of this work is to develop a correlation for the solids mass flow rate through the analysis of dimensionless numbers, calculated from process variables, together with experimental data. An accurate correlation makes the design and control of an L valve easier. The study disregards the influence of the reactors that an L valve connects; in other words, the approach is limited to the influence of the valve geometry, the variation of the air injection and the particle properties. A test bench was built with two L valves (diameters of 34 and 70 mm) made of acrylic. Glass beads were used as solids (Sauter diameter 0.8 mm, effective density 1580 kg/m³, group D of the Geldart classification), conveyed by compressed air. Applying the Buckingham Pi theorem to the important process variables, three dimensionless numbers were obtained. After a series of tests, these dimensionless numbers were calculated for various test conditions, and an equation fit and a correlation for the solids flow rate were obtained from the experimental data. Six correlations were calculated, but only three describe the L-valve process with acceptable accuracy. A maximum solids flow rate was observed for the 34 mm valve; beyond this point, any increase in the injected gas flow decreases the flow of solids.
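The Buckingham Pi step described above can be reproduced mechanically: build the dimension matrix of the process variables in the (M, L, T) base and take its null space; each null-space vector gives the exponents of one dimensionless group. The variable set below (solids flow, particle density, gas velocity, pipe diameter, particle diameter, gravity) is an illustrative choice, not necessarily the exact set used in the thesis.

```python
from sympy import Matrix

# Dimension exponents (rows M, L, T) of illustrative L-valve variables:
# Ws: solids mass flow rate [kg/s], rho_p: particle density [kg/m^3],
# U: gas velocity [m/s], D: pipe diameter [m], dp: particle diameter [m],
# g: gravity [m/s^2].
variables = ["Ws", "rho_p", "U", "D", "dp", "g"]
dim = Matrix([
    [1, 1, 0, 0, 0, 0],     # M
    [0, -3, 1, 1, 1, 1],    # L
    [-1, 0, -1, 0, 0, -2],  # T
])

# Each null-space vector is an exponent set producing a dimensionless group.
pi_groups = dim.nullspace()
print(f"{len(pi_groups)} dimensionless groups "
      f"(n - rank = {dim.shape[1]} - {dim.rank()})")
for vec in pi_groups:
    terms = [f"{v}^{e}" for v, e in zip(variables, vec) if e != 0]
    print("pi =", " * ".join(terms))
```

For this six-variable set the dimension matrix has rank 3, so n - rank = 3 independent groups result, matching the three dimensionless numbers reported in the abstract.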
33

Life-cycle modelling in conceptual design: a modelling and evaluation method based on analogies and dimensionless numbers (Modélisation du cycle de vie en préconception : une méthode de modélisation et d'évaluation basée sur les analogies et les nombres sans dimensions)

Coatanéa, Eric 12 October 2005 (has links) (PDF)
This thesis develops a paradigm for conceptual design based on the idea that dimensional analysis can improve the evaluation and comparison of concepts of solution during the conceptual design process. The conceptual design approach developed in this research is a combination of tasks which starts with the identification of customer needs in a formalized manner, continues with the generation of design concepts taking into account the different phases of the physical life cycle, and ends with the evaluation of the concepts of solution and an analysis of their adequacy to the formalized needs.

The General Design Theory (GDT) is used as the methodological basis of this work. Using the results of GDT, the research introduces a definition of the concept of function which is generic and not tied to a solution-based approach. Consequently, the concept of function fulfils its intended objective of modelling design problems at a general level. In addition to the concept of function, this thesis introduces a series of classifications based on generic concepts and rules aimed at generating concepts of solution progressively. All these concepts are integrated into the developed metamodel framework. The metamodel provides a group of generic concepts associated with laws and mapped to a normalized functional vocabulary. The metamodel framework is an intermediate structure developed to provide guidance during the synthesis process and to meet the initial condition for transforming the classification structure into a metric space. A metric space is a topological space with a unique metric. The transformation of the initial topological space into a metric space can be obtained when a series of conditions is verified. The first condition consists of clustering the concepts of solution in order to underline the comparable aspects of each of them; this is done using a set of dedicated rules.

In addition, three other fundamental conditions must be satisfied. The metamodel framework ensures the first; an enhanced fundamental system of units provides the second; and a paradigm of separation of concepts provides the third. When all three conditions are verified, it becomes possible to transform design problems modelled by four types of generic variables into a series of dimensionless groups. This transformation is achieved using the Vaschy-Buckingham theorem and Butterfield's paradigm, the latter being used to select the minimum set of repeated variables that ensures the non-singularity of the metrization procedure. The transformation process ends with the creation of machinery dedicated to the qualitative simulation of the concepts of solution. The thesis closes with the study of practical cases.
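The repeated-variable selection mentioned above can be illustrated numerically: a candidate set of repeated variables is admissible only if its dimension submatrix is non-singular, and the exponents of each remaining variable's pi group then follow from solving a linear system. The variables below (density, velocity, length, viscosity) are a generic textbook example assumed for illustration, not the thesis's own variable set.

```python
import numpy as np

# (M, L, T) dimension exponents of four generic variables.
dims = {
    "rho": [1, -3, 0],   # density
    "U":   [0, 1, -1],   # velocity
    "L":   [0, 1, 0],    # characteristic length
    "mu":  [1, -1, -1],  # dynamic viscosity
}

repeated = ["rho", "U", "L"]  # candidate set of repeated variables
A = np.array([dims[v] for v in repeated], dtype=float).T  # 3x3 submatrix

# Non-singularity check: the selection is valid only if det(A) != 0.
assert abs(np.linalg.det(A)) > 1e-12, "singular selection of repeated variables"

# Pi group for the remaining variable mu: find exponents (a, b, c) such that
# mu * rho^a * U^b * L^c is dimensionless, i.e. A @ [a, b, c] = -dims["mu"].
a, b, c = np.linalg.solve(A, -np.array(dims["mu"], dtype=float))
print(f"pi = mu * rho^{a:.0f} * U^{b:.0f} * L^{c:.0f}")
```

Here the solver returns exponents (-1, -1, -1), i.e. pi = mu/(rho·U·L), the reciprocal of the Reynolds number; a singular submatrix would instead flag the repeated-variable set as inadmissible, which is the role the abstract attributes to Butterfield's paradigm.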
34

Exploring Undergraduate Disciplinary Writing: Expectations and Evidence in Psychology and Chemistry

Moran, Katherine E. 07 May 2013 (has links)
Research in the area of academic writing has demonstrated that writing varies significantly across disciplines and among genres within disciplines. Two important approaches to studying diversity in disciplinary academic writing have been the genre-based approach and the corpus-based approach. Genre studies have considered the situatedness of writing tasks, including the larger sociocultural context of the discourse community (e.g., Berkenkotter & Huckin, 1995; Bhatia, 2004) as well as the move structure in specific genres like the research article (e.g., Swales, 1990, 2004). Corpus-based studies of disciplinary writing have focused more closely on the linguistic variation across registers, with the research article being the most widely studied register (e.g., Cortes, 2004; Gray, 2011). Studies of undergraduate writing in the disciplines have tended to focus on task classification (e.g., Braine, 1989; Horowitz, 1986a), literacy demands (e.g., Carson, Chase, Gibson, & Hargrove, 1992), or student development (e.g., Carroll, 2002; Leki, 2007). The purpose of the present study is to build on these previous lines of research to explore undergraduate disciplinary writing from multiple perspectives in order to better prepare English language learners for the writing tasks they might encounter in their majors at a US university. Specifically, this exploratory study examines two disciplines: psychology and chemistry. Through writing task classification (following Horowitz, 1986), qualitative interviews with faculty and students in each discipline, and a corpus-based text analysis of course readings and upper-division student writing, the study yielded several important findings. With regard to writing tasks, psychology writing tasks showed more variety than chemistry. In addition, lower division classes had fewer writing assignments than upper division courses, particularly in psychology.
The findings also showed a mismatch between the expectations of instructors in each discipline and students’ understanding of such writing expectations. The linguistic analysis of course readings and student writing demonstrated differences in language use both between registers and across disciplines.
35

Analytical, Numerical And Experimental Investigation Of The Distortion Behavior Of Steel Shafts During Through

Maradit, Betul Pelin 01 September 2010 (has links) (PDF)
Distortion (undesired dimension and shape changes) is one of the most important problems of through-hardened steel components. During quenching, anisotropic dimensional changes are inevitable due to classical plasticity and transformation-induced plasticity. Moreover, various distortion potential carriers are brought into the material during the production chain. This study consists of analytical, numerical and experimental investigations of quench distortion. In the numerical and analytical part, a sensitivity analysis of the quenching model and a dimensional analysis of distortion were conducted using experimentally verified simulations. In the sensitivity analysis, the effect of uncertainties in the input data on simulation results was determined, whereas in the dimensional analysis the influence of the various dimensionless numbers that govern quench distortion was investigated. Throughout the study, gas-nozzle-field quenching of SAE52100 long shafts was simulated. Simulations were performed with the commercial finite element analysis software SYSWELD®. Conceptual results indicate that the most important material properties and dimensionless numbers are the ones that govern volume change. Moreover, those that determine the plasticity of austenite significantly affect the isotropy of the dimensional changes. When unimportant dimensionless numbers are eliminated, 14 dimensionless combinations remain that govern the problem. In the experimental part of the study, the effect of microstructure on the distortion behavior of SAE52100 long cylinders with various diameters was investigated. In addition to gas-nozzle-field quenching, salt bath and high-speed quenching experiments were performed. According to the experimental findings, there is a correlation between the distortion of long cylinders and their machining position with respect to the billet.
36

A method for modeling under-expanded jets

Day, Julia Katherine 23 April 2013 (has links)
In nuclear power plants, a pipe break in the cooling line releases a jet that can damage other equipment in containment; this event is known as a loss of coolant accident (LOCA). This report focuses specifically on boiling water reactor (BWR) applications as a guide for future studies of pressurized water reactors (PWRs). It presents a methodology for characterizing the jet such that, given a set of upstream conditions, the pressure field and damage potential of the jet can be predicted by an end user with a minimum of computation. The resulting model has two main advantages over previous models: it is easily calculated with knowledge readily available to plant operators, and it provides new metrics that allow a quick and intuitive understanding of the damage potential of the jet.
38

Extending low-rank matrix factorizations for emerging applications

Zhou, Ke 13 January 2014 (has links)
Low-rank matrix factorizations have become increasingly popular for projecting high-dimensional data into latent spaces of small dimension, in order to obtain a better understanding of the data and thus more accurate predictions. In particular, they have been widely applied to important applications such as collaborative filtering and social network analysis. In this thesis, I investigate applications and extensions of low-rank matrix factorization to solve several practically important problems arising from collaborative filtering and social network analysis.

A key challenge in recommender system research is how to effectively profile new users, a problem generally known as cold-start recommendation. In the first part of this work, we extend low-rank matrix factorization by allowing the latent factors to have a more complex structure, namely decision trees, to solve the problem of cold-start recommendation. In particular, we present functional matrix factorization (fMF), a novel cold-start recommendation method that solves the problem of adaptive interview construction based on low-rank matrix factorizations. The second part of this work considers the efficiency of making recommendations over large user and item spaces. Specifically, we address the problem by learning binary codes for collaborative filtering, which can be viewed as restricting the latent factors in low-rank matrix factorizations to binary vectors that represent the binary codes for both users and items. In the third part of this work, we investigate applications of low-rank matrix factorizations in social network analysis. Specifically, we propose a convex optimization approach to discover the hidden network of social influence, with low-rank and sparse structure, by modelling the recurrent events at different individuals as multi-dimensional Hawkes processes, emphasizing the mutually exciting nature of the dynamics of event occurrences. The proposed framework combines the estimation of mutually exciting processes and low-rank matrix factorization in a principled manner. In the fourth part of this work, we estimate the triggering kernels of the Hawkes process. In particular, we focus on estimating the triggering kernels from an infinite-dimensional functional space via the Euler-Lagrange equation, which can be viewed as applying the idea of low-rank factorization in a functional space.
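The common building block behind the extensions described in this abstract can be sketched as a plain low-rank factorization of an incomplete rating matrix via alternating least squares (ALS); fMF and the binary-code variant replace the free latent factors below with tree-structured or binary ones. This is a generic sketch under assumed data, not the thesis's implementation.

```python
import numpy as np

def als_factorize(R, mask, rank=2, reg=0.05, iters=20, seed=0):
    """Factor R ~= U @ V.T using only the entries where mask is True."""
    rng = np.random.default_rng(seed)
    n, m = R.shape
    U = rng.standard_normal((n, rank)) * 0.1
    V = rng.standard_normal((m, rank)) * 0.1
    I = reg * np.eye(rank)
    for _ in range(iters):
        for i in range(n):            # update each user factor in turn
            idx = mask[i]
            if idx.any():
                Vi = V[idx]
                U[i] = np.linalg.solve(Vi.T @ Vi + I, Vi.T @ R[i, idx])
        for j in range(m):            # update each item factor in turn
            idx = mask[:, j]
            if idx.any():
                Uj = U[idx]
                V[j] = np.linalg.solve(Uj.T @ Uj + I, Uj.T @ R[idx, j])
    return U, V

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    true_U, true_V = rng.random((6, 2)), rng.random((5, 2))
    R = true_U @ true_V.T                 # synthetic rank-2 "ratings"
    mask = rng.random(R.shape) < 0.8      # ~80% of entries observed
    U, V = als_factorize(R, mask)
    rmse = np.sqrt(np.mean((R - U @ V.T)[mask] ** 2))
    print(f"observed RMSE: {rmse:.4f}")
```

Each half-step is a small ridge regression, so the regularized loss decreases monotonically; the cold-start and binary-code methods in the thesis change what structure the rows of U and V are allowed to take, not this alternating scheme itself.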
39

Analysis of skeletal and dental changes with a tooth-borne and a bone-borne maxillary expansion appliance assessed through digital volumetric imaging

Lagravere Vich, Manuel Oscar 11 1900 (has links)
The purpose of this research was to compare the skeletal and dental changes, assessed from digital volumetric images, produced during and after rapid maxillary expansion (RME) between a bone-borne anchored expansion appliance and a conventional tooth-borne RME appliance. Initial steps included the development of a methodology to analyze CBCT images. The reliability of traditional two-dimensional (2D) cephalometric landmarks identified in CBCT images was explored, and new landmarks identifiable on the CBCT images were also evaluated. This methodology was then tested in a clinical trial with 62 patients, in which the skeletal and dental changes found after maxillary expansion with either a bone-borne or a tooth-borne maxillary expander were compared to a non-treated control group. The conclusions obtained from this thesis were that NewTom 9” and 12” three-dimensional (3D) images present a 1-to-1 ratio with real coordinates and with linear and angular distances obtained by a coordinate measuring machine (CMM). Landmark intra- and inter-rater reliability (ICC) was high for all CBCT landmarks and for most of the 2D lateral cephalometric landmarks. The foramen Spinosum, foramen Ovale, foramen Rotundum and the hypoglossal canal all provided excellent intra-observer reliability and accuracy. The midpoint between both foramina Spinosa (ELSA) presented high intra-rater reliability and is an adequate reference point for 3D cephalometric analysis. ELSA and both the AEM and DFM points presented high intra-rater reliability when located on 3D images, but minor variations in the location of these landmarks produced unacceptable uncertainty in coordinate-system alignment. The potential error associated with the location of distant landmarks is unacceptable for the analysis of growth and treatment changes; an alternative is the use of vectors. The selection of landmarks for 3D image analysis should therefore satisfy certain characteristics, and modifications of their definitions should be applied.

When measuring 3D maxillary complex structural changes during maxillary expansion treatment using CBCT, tooth-anchored and bone-anchored expanders presented similar results. The greatest changes occurred in the transverse dimension, while changes in the vertical and antero-posterior dimensions were negligible. Dental expansion was also greater than skeletal expansion. Bone-anchored maxillary expanders can be considered an alternative to tooth-anchored maxillary expanders. / Medical Sciences in Orthodontics
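The intra- and inter-rater reliability figures cited above are typically intraclass correlation coefficients. A minimal two-way random-effects ICC(2,1) computation might look like the following; the landmark coordinates here are made-up numbers for illustration only.

```python
import numpy as np

def icc_2_1(x):
    """ICC(2,1): two-way random effects, absolute agreement, single rater.
    x is an (n subjects/landmarks) x (k raters/sessions) matrix."""
    n, k = x.shape
    grand = x.mean()
    row_means = x.mean(axis=1)    # per-landmark means
    col_means = x.mean(axis=0)    # per-rater means
    msr = k * np.sum((row_means - grand) ** 2) / (n - 1)   # between subjects
    msc = n * np.sum((col_means - grand) ** 2) / (k - 1)   # between raters
    resid = x - row_means[:, None] - col_means[None, :] + grand
    mse = np.sum(resid ** 2) / ((n - 1) * (k - 1))         # residual error
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    truth = rng.uniform(0, 50, size=10)       # hypothetical coordinates, mm
    ratings = truth[:, None] + rng.normal(0, 0.2, size=(10, 3))  # 3 sessions
    print(f"ICC(2,1) = {icc_2_1(ratings):.3f}")
```

With between-landmark spread far larger than the 0.2 mm session noise, the coefficient comes out close to 1, i.e. the "high intra-rater reliability" pattern the abstract describes.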
40

Isometry and convexity in dimensionality reduction

Vasiloglou, Nikolaos 30 March 2009 (has links)
The volume of data generated every year grows exponentially. The number of data points, as well as their dimensionality, has increased dramatically over the past 15 years, and the gap between the data-processing demands of industry and the solutions provided by the machine learning community keeps widening. Despite the growth in memory and computational power, advanced statistical processing of gigabyte-scale data remains out of reach: most sophisticated machine learning algorithms have at least quadratic complexity, and with current computer architectures, algorithms with complexity higher than linear O(N) or O(N log N) are not considered practical.

Dimensionality reduction is a challenging problem in machine learning. Data represented as multidimensional points often have high nominal dimensionality, yet the information they carry can be expressed with far fewer dimensions; moreover, the reduced dimensions can be more interpretable than the original ones. There is a great variety of dimensionality reduction algorithms under the theory of manifold learning. Most methods, such as Isomap, Local Linear Embedding, Local Tangent Space Alignment and Diffusion Maps, have been extensively studied under the framework of Kernel Principal Component Analysis (KPCA). In this dissertation we study two state-of-the-art dimensionality reduction methods, Maximum Variance Unfolding (MVU) and Non-Negative Matrix Factorization (NMF), which do not fit under the umbrella of Kernel PCA. MVU is cast as a semidefinite program, a modern convex nonlinear optimization formulation that offers more flexibility and power than KPCA. Although MVU and NMF seem to be two disconnected problems, we show that there is a connection between them: both are special cases of a general nonlinear factorization algorithm that we developed.

Two aspects of the algorithms are of particular interest: computational complexity and interpretability. Computational complexity answers the question of how fast we can find the best solution of MVU/NMF for large data volumes. Since we are dealing with optimization programs, we need to find the global optimum, which is strongly connected with the convexity of the problem. Interpretability is strongly connected with local isometry, which gives meaning to relationships between data points; another aspect of interpretability is the association of data with labeled information. The contributions of this thesis are the following:

1. MVU is modified so that it can scale more efficiently. Results are shown on speech datasets of up to 1 million points, and the limitations of the method are highlighted.
2. An algorithm for fast computation of furthest neighbors is presented for the first time in the literature.
3. The construction of optimal kernels for kernel density estimation with modern convex programming is presented. For the first time we show that the leave-one-out cross-validation (LOOCV) function is quasi-concave.
4. For the first time, NMF is formulated as a convex optimization problem.
5. An algorithm for the problem of completely positive matrix factorization is presented.
6. A hybrid algorithm of MVU and NMF, isoNMF, is presented, combining the advantages of both methods.
7. Isometric Separation Maps (ISM), a variation of MVU that incorporates classification information, is presented.
8. Large-scale nonlinear dimensionality analysis of the TIMIT speech database is performed.
9. A general nonlinear factorization algorithm based on sequential convex programming is presented.

Despite the efforts to scale the proposed methods up to 1 million data points in reasonable time, the gap between industrial demand and the current state of the art remains orders of magnitude wide.
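For context on the NMF side of this work, the classical (non-convex) baseline that the thesis reformulates can be sketched with Lee-Seung multiplicative updates. This is the standard textbook algorithm on assumed synthetic data, not the convex formulation developed in the dissertation.

```python
import numpy as np

def nmf(V, rank=2, iters=200, seed=0, eps=1e-9):
    """Nonnegative matrix factorization V ~= W @ H via Lee-Seung
    multiplicative updates (minimizing squared Frobenius error)."""
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, rank)) + eps
    H = rng.random((rank, m)) + eps
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)   # update H; stays nonnegative
        W *= (V @ H.T) / (W @ H @ H.T + eps)   # update W; stays nonnegative
    return W, H

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    V = rng.random((8, 2)) @ rng.random((2, 6))   # exactly rank-2, nonnegative
    W, H = nmf(V)
    err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
    print(f"relative error: {err:.4f}")
```

Because the updates are multiplicative, nonnegativity of W and H is preserved automatically, but the objective is non-convex in (W, H) jointly and only a local optimum is guaranteed; removing that limitation is precisely what the thesis's convex reformulation (contribution 4) targets.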
