561
Maximum-likelihood kernel density estimation in high-dimensional feature spaces / Van der Walt, Christiaan Maarten. January 2014 (has links)
With the advent of the internet and advances in computing power, the collection of very large high-dimensional datasets has become feasible; understanding and modelling high-dimensional data has thus become a crucial activity, especially in the field of pattern recognition. Since non-parametric density estimators are data-driven and do not require or impose a pre-defined probability density function on data, they are very powerful tools for probabilistic data modelling and analysis. Conventional non-parametric density estimation methods, however, originated from the field of statistics and were not originally intended to perform density estimation in high-dimensional feature spaces, as is often encountered in real-world pattern recognition tasks. We therefore address the fundamental problem of non-parametric density estimation in high-dimensional feature spaces in this study. Recent advances in maximum-likelihood (ML) kernel density estimation have shown that kernel density estimators hold much promise for estimating non-parametric probability density functions in high-dimensional feature spaces. We therefore derive two new iterative kernel bandwidth estimators from the ML leave-one-out objective function and also introduce a new non-iterative kernel bandwidth estimator (based on the theoretical bounds of the ML bandwidths) for the purpose of bandwidth initialisation. We name the iterative kernel bandwidth estimators the minimum leave-one-out entropy (MLE) and global MLE estimators, and the non-iterative kernel bandwidth estimator the MLE rule-of-thumb estimator. We compare the performance of the MLE rule-of-thumb estimator and conventional kernel density estimators on artificial data with properties that are varied in a controlled fashion, and on a number of representative real-world pattern recognition tasks, to gain a better understanding of the behaviour of these estimators in high-dimensional spaces and to determine whether they are suitable for initialising the bandwidths of iterative ML bandwidth estimators in high dimensions. We find that there are several regularities in the relative performance of conventional kernel density estimators across different tasks and dimensionalities, and that the Silverman rule-of-thumb bandwidth estimator performs reliably across most tasks and dimensionalities of the pattern recognition datasets considered, even in high-dimensional feature spaces. Based on this empirical evidence and the theoretical motivation that the Silverman estimator optimises the asymptotic mean integrated squared error (assuming a Gaussian reference distribution), we select this estimator to initialise the bandwidths of the iterative ML kernel bandwidth estimators compared in our simulation studies. We then perform a comparative simulation study of the newly introduced iterative MLE estimators and other state-of-the-art iterative ML estimators on a number of artificial and real-world high-dimensional pattern recognition tasks. We illustrate with artificial data (guided by theoretical motivations) under what conditions certain estimators should be preferred, and we confirm empirically on real-world data that no estimator performs optimally on all tasks and that the optimal estimator depends on the properties of the underlying density function being estimated.
We also observe an interesting case of the bias-variance trade-off, where ML estimators with fewer parameters than the MLE estimator perform exceptionally well on a wide variety of tasks; for the cases where these estimators do not perform well, however, the MLE estimator generally does. The newly introduced MLE kernel bandwidth estimators prove to be a useful contribution to the field of pattern recognition, since they perform optimally on a number of the real-world pattern recognition tasks investigated and provide researchers and practitioners with two alternative estimators to employ for the task of kernel density estimation. / PhD (Information Technology), North-West University, Vaal Triangle Campus, 2014
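As a rough illustration of the initialisation scheme described above, the sketch below pairs Silverman's rule of thumb with a leave-one-out ML grid search over a single scalar Gaussian bandwidth. This is a hedged reconstruction, not the thesis's implementation; the function names and the grid-search refinement are illustrative.

```python
import numpy as np

def silverman_bandwidth(X):
    """Silverman rule of thumb for a d-dimensional Gaussian kernel:
    h = sigma * (4 / ((d + 2) * n)) ** (1 / (d + 4)),
    with sigma taken as the mean marginal standard deviation."""
    n, d = X.shape
    sigma = X.std(axis=0, ddof=1).mean()
    return sigma * (4.0 / ((d + 2) * n)) ** (1.0 / (d + 4))

def loo_log_likelihood(X, h):
    """Leave-one-out log-likelihood of a Gaussian KDE with bandwidth h."""
    n, d = X.shape
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)  # pairwise squared distances
    K = np.exp(-sq / (2 * h * h)) / ((2 * np.pi * h * h) ** (d / 2))
    np.fill_diagonal(K, 0.0)  # leave each point out of its own estimate
    dens = K.sum(axis=1) / (n - 1)
    return np.log(dens).sum()

def ml_bandwidth(X, factors=np.linspace(0.3, 3.0, 28)):
    """Grid-search ML bandwidth, initialised at the Silverman estimate."""
    h0 = silverman_bandwidth(X)
    hs = factors * h0
    lls = [loo_log_likelihood(X, h) for h in hs]
    return hs[int(np.argmax(lls))]

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))  # 200 points in a 10-d feature space
print(silverman_bandwidth(X), ml_bandwidth(X))
```

In high dimensions the leave-one-out likelihood often favours bandwidths well away from the rule-of-thumb value, which is why a reliable initialiser matters for the iterative estimators.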
562
Density dynamics: a holistic understanding of high density environments / Abraham, Jose P. January 1900 (has links)
Master of Regional and Community Planning / Department of Landscape Architecture/Regional and Community Planning / Jason Brody / Today, achieving higher residential densities is an integral part of most discussions on concepts such as sustainability, placemaking, smart growth, and new urbanism. It is argued that high-density environments can potentially improve quality of life through a range of social benefits. In attempting to achieve these benefits, developments that provide more than a certain number of dwelling units are often considered desirable and successful high-density developments. However, understanding high residential density merely as an increase in the number of dwelling units over an area of development might not help realize meaningful social benefits; in fact, it could result in problems such as parking constraints, increased vehicular traffic, crowding, and eventually abandonment. This presents a dilemma: how to understand high-density environments holistically.
Using literature review and design exploration as two key research methods, this project aims to resolve this dilemma by presenting a holistic understanding of desirable high-density environments. The research works on the idea that high densities are a matter of design and performance. Through a synthesis of the literature review and explorative design findings, this research focuses on the qualitative aspects of high-density environments that make them meaningful and desirable.
The research finds that desirable high-density environments should (a) Be Physically Compact; (b) Support Urbanity; and (c) Offer Livability and Sense of Place. These three qualitative aspects of high-density environments are critical in determining how well such environments perform. The research further proposes eight meaningful goals and seventeen specific guidelines to achieve the three qualities that influence the performance of high-density developments. In addition to these principles and guidelines, the opportunities and challenges posed by the explorative design exercises allow certain supplementary guidelines to be identified that strengthen the framework. Together, these findings result in a theoretical framework that may be used as an effective design and evaluation tool for high-density environments. This framework is named “Density Dynamics” to signify the various morphological and socio-economic dynamics involved in a holistic understanding of high-density environments.
563
Automatic Random Variate Generation for Simulation Input / Hörmann, Wolfgang; Leydold, Josef. January 2000 (has links) (PDF)
We develop and evaluate algorithms for generating random variates for simulation input. One group, called automatic or black-box algorithms, can be used to sample from distributions with known density. They are based on the rejection principle. The hat function is generated automatically in a setup step using the idea of transformed density rejection: the density is transformed into a concave function, and the minimum of several tangents is used to construct the hat function. The resulting algorithms are not too complicated and are quite fast. The principle is also applicable to random vectors. A second group of algorithms is presented that generates random variates directly from a given sample by implicitly estimating the unknown distribution. The best of these algorithms are based on the idea of naive resampling plus added noise; they can be interpreted as sampling from kernel density estimates. This method can also be applied to random vectors, where it can be interpreted as a mixture of naive resampling and sampling from the multi-normal distribution that has the same covariance matrix as the data. The algorithms described in this paper have been implemented in ANSI C in a library called UNURAN which is available via anonymous ftp. (author's abstract) / Series: Preprint Series / Department of Applied Statistics and Data Processing
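As a sketch of the transformed-density-rejection idea summarised above (with T(x) = log x as the transformation), the following builds the hat from tangents to the log-density at a few design points and samples it by inversion. This is illustrative only; it assumes a log-concave density with sorted design points of strictly decreasing slope (so the leftmost slope is positive and the rightmost negative), and it is not the UNURAN implementation.

```python
import numpy as np

def tdr_sampler(logpdf, dlogpdf, points, size, rng):
    """Transformed-density-rejection sampler for a log-concave density.
    Tangents to log f at `points` form a piecewise-linear upper bound;
    exp of that envelope is the hat, sampled by inversion per segment."""
    p = np.asarray(points, float)
    b = dlogpdf(p)                      # tangent slopes
    a = logpdf(p) - b * p               # tangent intercepts: t_i(x) = a_i + b_i x
    # Breakpoints: intersections of neighbouring tangents, plus +/- infinity.
    z = np.empty(len(p) + 1)
    z[0], z[-1] = -np.inf, np.inf
    z[1:-1] = (a[:-1] - a[1:]) / (b[1:] - b[:-1])
    # Area under each exponential hat segment: integral of exp(a + b x).
    def seg_area(i):
        if b[i] == 0:
            return (z[i + 1] - z[i]) * np.exp(a[i])
        return (np.exp(a[i] + b[i] * z[i + 1]) - np.exp(a[i] + b[i] * z[i])) / b[i]
    cum = np.cumsum([seg_area(i) for i in range(len(p))])
    out = []
    while len(out) < size:
        u = rng.random() * cum[-1]
        i = int(np.searchsorted(cum, u))
        w = u - (cum[i - 1] if i else 0.0)   # hat area to consume inside segment i
        if b[i] == 0:
            x = z[i] + w * np.exp(-a[i])
        else:  # invert the segment's exponential CDF
            base = np.exp(a[i] + b[i] * z[i]) if np.isfinite(z[i]) else 0.0
            x = (np.log(b[i] * w + base) - a[i]) / b[i]
        if np.log(rng.random()) <= logpdf(x) - (a[i] + b[i] * x):  # accept/reject
            out.append(x)
    return np.array(out)

rng = np.random.default_rng(1)
# Standard normal (log-concave): log f = -x^2/2 up to a constant.
xs = tdr_sampler(lambda x: -0.5 * x**2, lambda x: -x, [-1.0, 1.0], 1000, rng)
print(xs.mean(), xs.std())
```

The second group described above (naive resampling plus noise) is, in the same spirit, just `rng.choice(sample, size) + h * rng.standard_normal(size)` for a Gaussian kernel with bandwidth h.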
564
Confinement effect on semiconductor nanowires properties / Nduwimana, Alexis. 02 November 2007 (has links)
100 pages. Directed by Dr. Mei-Yin Chou.
We study the effect of confinement on various properties of semiconductor nanowires. First, we study the size and direction dependence of the band gap of germanium nanowires, using density functional theory in the local density approximation. Results show that the band gap decreases as the diameter increases. The susceptibility of these nanowires is also computed. Second, we look at the confinement effect on the piezoelectric coefficients of ZnO and AlN nanowires, using the Berry phase method. It is found that, depending on passivation, the piezoelectric effect can decrease or increase. Finally, we study the size and direction dependence of the melting temperature of silicon nanowires, using molecular dynamics with the Stillinger-Weber potential. Results indicate that the melting temperature increases with the nanowire diameter and that it is direction-dependent.
566
DENSIDADE DE ÁRVORES POR DIÂMETRO NA FLORESTA ESTACIONAL DECIDUAL NO RIO GRANDE DO SUL / DENSITY OF TREES BY DIAMETER IN SEASONAL DECIDUOUS FOREST IN RIO GRANDE DO SUL / Meyer, Evandro Alcir. 28 February 2011 (has links)
Conselho Nacional de Desenvolvimento Científico e Tecnológico / The objective of this work was to study the relationship between the density of trees per hectare and the average diameter for a Seasonal Deciduous Forest, and to fit Reineke's model to describe this behaviour. The study area is located in the municipality of Silveira Martins, in the central region of Rio Grande do Sul, and is in an early stage of succession following agricultural use. The information on the number of trees per hectare and the average diameter was obtained using the point-density method proposed by Spurr. The plots were sampled in the early stages of a secondary forest, selecting areas dominated by camboatá-vermelho (Cupania vernalis). Since natural forests have irregular spacing, density is highly variable; therefore, to select only high-density plots, areas with the occurrence of dead individuals were chosen. Different methods were tested to estimate the upper limit of the self-thinning line: regression analysis (for all data and for relative densities greater than 60%) with the intercept corrected so that all residuals were negative; manual adjustment; relative density (RD > 90%); and stochastic frontier analysis. The method that best estimated the maximum density was regression analysis using data at or above 60% of the maximum density, yielding a slope of -1.563 for Reineke's model. There was no significant difference among the exponents provided by the different methods. The maximum stand density index (SDI) was 1779 trees per hectare for a quadratic mean diameter (dg) of 25 cm. The density management diagram was constructed on the basis of basal area, number of trees per hectare, and the diameter of the tree of mean basal area. Densities of 15% and 60% were used for canopy closure and the onset of mortality, respectively. Density levels were determined in proportion to the maximum SDI for a reference diameter of 25 cm, in index classes of 200, from an SDI of 1700 down to a minimum of 300. Populations whose density is greater than 60% of the maximum were considered overstocked; between 60% and 15%, fully stocked; and below 15%, understocked. The combination of the density management diagram generated in this study with Spurr's method is recommended to guide interventions in the Seasonal Deciduous Forest.
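For context, Reineke's model is the power law N = k · dg^b, usually fitted as log10 N = a + b · log10 dg, and the stand density index rescales an observed stand to the 25 cm reference diameter along that slope. A minimal sketch with synthetic plot data follows; the slope of -1.563 and the maximum SDI of 1779 come from the abstract, while everything else is made up for illustration.

```python
import numpy as np

# Reineke's self-thinning model: log10(N) = a + b * log10(dg),
# fitted here by ordinary least squares on synthetic high-density plots.
rng = np.random.default_rng(2)
dg = rng.uniform(5.0, 30.0, 40)                  # quadratic mean diameter (cm)
b_true = -1.563                                  # slope reported in the abstract
a_true = np.log10(1779) - b_true * np.log10(25)  # anchor the line at SDI 1779
N = 10 ** (a_true + b_true * np.log10(dg) + rng.normal(0, 0.03, 40))

b, a = np.polyfit(np.log10(dg), np.log10(N), 1)  # returns [slope, intercept]

def sdi(N_obs, dg_obs, slope=b, ref=25.0):
    """Stand density index: trees/ha the stand would carry at dg = ref cm."""
    return N_obs * (ref / dg_obs) ** slope

print(f"fitted slope = {b:.3f}")
print(f"SDI of a plot with N=900/ha, dg=15 cm: {sdi(900, 15.0):.0f}")
# Relative density against the maximum SDI of 1779 from the abstract:
print(f"relative density: {sdi(900, 15.0) / 1779:.0%}")
```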
567
Intermolecular Interactions In Molecular Crystals: Quantitative Estimates From Experimental And Theoretical Charge Densities / Munshi, Parthapratim. 06 1900 (has links) (PDF)
The thesis entitled “Intermolecular Interactions in Molecular Crystals: Quantitative Estimates from Experimental and Theoretical Charge Densities” consists of four chapters and an Appendix. Chapter 1 highlights the principles of crystal engineering from the charge density point of view. Chapter 2 (Sections I–III) deals with the evaluation of weak intermolecular interactions, in particular as related to the features of concomitant polymorphism. Chapter 3 describes the co-operative role of weak interactions in the presence of strong hydrogen bonds in small bioactive molecules in terms of topological properties. Chapter 4 unravels the inter-ion interactions in terms of charge density features in an ionic salt. The general conclusions of the work presented in this thesis are provided at the end of the chapters. Appendix A explores the varieties of hydrogen bonds in a simple molecule.
Identification of intermolecular interactions based purely on distance-angle criteria is inadequate; in the context of ‘quantitative crystal engineering’, the recognition of critical points in the charge density distribution becomes extremely relevant to justify the occurrence of any interaction in the intermolecular space. The results from single-crystal X-ray diffraction data at 90 K (113 K for the compound in Chapter 4) have been compared with those from periodic theoretical calculations via the DFT method with a high-level basis set (B3LYP/6-31G**) in order to establish a common platform between theory and experiment.
Chapter 1 gives a brief review of crystal engineering for the analysis of intermolecular interactions, along with a description of both the experimental and theoretical approaches used in the analysis of charge densities in molecular crystals. The eight criteria of Koch and Popelier, defined using the theory of “Atoms in Molecules” to characterize hydrogen bonds, are also discussed in detail.
Chapter 2 (I) presents the charge density analysis in coumarin, 1-thiocoumarin, and 3-acetylcoumarin. Coumarin has been extensively studied as it finds applications in several areas of synthetic chemistry, medicinal chemistry, and photochemistry. The packing of molecules in the crystal lattice is governed by weak C−H···O and C−H···π interactions only. The variations in charge density properties and derived local energy densities have been investigated in these regions of intermolecular interactions. The open problem of identifying a lower limit for hydrogen-bond formation has been addressed in terms of all eight of Koch and Popelier's criteria, to bring out the distinguishing features between a hydrogen bond (C−H···O) and a van der Waals interaction (C−H···π) for the first time.
Chapter 2 (II) highlights the nature of intermolecular interactions involving sulfur in 1-thiocoumarin, 2-thiocoumarin, and dithiocoumarin. These compounds pack in the crystal lattice mainly via weak C−H···S and S···S interactions. The analysis of experimental and theoretical charge densities clearly categorizes these interactions as purely van der Waals in nature. The distribution of charge densities in the vicinity of the S atom has been analyzed to get better insights into the nature of sulfur in different environments.
Chapter 2 (III) provides a detailed investigation of the charge density distribution in the concomitant polymorphs of 3-acetylcoumarin. The electron density maps in the two forms demonstrate the differences in the nature of the charge density distribution, particularly in the features associated with C−H···O and C−H···π interactions. The net charges derived from the population analysis via multipole refinement, the charges evaluated via integration over the atomic basins, and the molecular dipole moments show significant differences. The lattice energies calculated from the experimental charge density approach clearly suggest that form A is thermodynamically more stable than form B. Mapping of the electrostatic potential over the molecular surfaces also brings out the differences between the two forms.
Chapter 3 describes the analysis of charge density distribution in three small bioactive molecules, 2-thiouracil, cytosine monohydrate, and salicylic acid. These molecules pack in the crystal lattice via strong hydrogen bonds, such as N−H···O, N−H···S, and O−H···O. In spite of the presence of such strong hydrogen bonds, the weak interactions like C−H···O and C−H···S also contribute in tandem to the packing features. The distribution of charge densities in intermolecular space provides a quantitative comparison of the strength of both strong and weak interactions. The variations in electronegativity associated with the S, O, and N atoms are clearly seen in the electrostatic potential maps over the molecular surfaces.
Chapter 4 deals with the study of intermolecular interactions in N,N,N′,N′-tetramethylethylenediammonium dithiocyanate, analyzed based on experimental charge densities from X-ray diffraction data at 113 K and compared with theoretical charge densities. The packing in the crystal lattice is governed mainly by a strong N⁺−H···N⁻ hydrogen bond along with several weak interactions such as C−H···S, C−H···N, and C−H···π. The charge density distribution in the region of inter-ionic interaction is also highlighted, and the electrostatic potential map clearly provides insights into its interaction features.
Appendix A describes the experimental and theoretical charge density studies in 1-formyl-3-thiosemicarbazide and the assessment of five varieties of hydrogen bonds.
568
STUDIES ON PARTITION DENSITY FUNCTIONAL THEORY / Kui Zhang (11642212). 28 July 2022 (has links)
Partition density functional theory (P-DFT) is a density-based embedding method used to calculate the electronic properties of molecules through self-consistent calculations on fragments. P-DFT features a unique set of fragment densities that can be used to define formal charges and local dipoles. This dissertation is concerned mainly with establishing how the optimal fragment densities and energies of P-DFT depend on the specific methods employed during the self-consistent fragment calculations. First, we develop a procedure to perform P-DFT calculations on three-dimensional heteronuclear diatomic molecules, and we compare and contrast two different approaches to deal with non-integer electron numbers: fractionally occupied orbitals (FOO) and ensemble averages (ENS). We find that, although both ENS and FOO methods lead to the same total energy and density, the ENS fragment densities are less distorted than those of FOO when compared to their isolated counterparts. Second, we formulate partition spin density functional theory (P-SDFT) and perform numerical calculations on closed- and open-shell diatomic molecules. We find that, for closed-shell molecules, while P-SDFT and P-DFT are equivalent for FOO, they partition the same total density of a molecule differently for ENS. For open-shell molecules, P-SDFT and P-DFT yield different sets of fragment densities for both FOO and ENS. Finally, by considering a one-electron system, we investigate the self-interaction error (SIE) produced by approximate exchange-correlation functionals and find that the molecular SIE can be attributed mainly to the non-additive Hartree-exchange-correlation energy.
569
Computing the Kinetic Energy from Electron Distribution Functions / Chakraborty, Debajit. 04 1900 (has links)
ABSTRACT: Approximating the kinetic energy as a functional of the electron density is a daunting, but important, task. For molecules in equilibrium geometries, the kinetic energy is equal in magnitude to the total electronic energy, so achieving the exquisite accuracy in the total energy that is needed for chemical applications requires similar accuracy for the kinetic energy functional. For this reason, most density functional theory (DFT) calculations use the Kohn-Sham method, which provides a good estimate for the kinetic energy. But the computational cost of Kohn-Sham DFT calculations has a direct dependence on the total number of electrons because the Kohn-Sham method is based on the orbital picture, with one orbital per electron. Explicit density functionals, where the kinetic energy is written explicitly in terms of the density and not in terms of orbitals, are much faster to compute. Unfortunately, the explicit density functionals in the literature have had disappointing accuracy. This dissertation introduces several new approaches for orbital-free density functional methods. One can try to include information about the Pauli principle using the exchange hole. In the weighted density approximation (WDA), a model for the exchange hole is used to approximate the one-electron density matrix, which is then used to compute the kinetic energy. This thesis introduces a symmetric, normalized, weighted density approximation using the exchange hole of the uniform electron gas. Though the key results on kinetic energy are not accurate enough, an efficient algorithm is introduced which, with a more sophisticated hole model, might give better results. The effects of electron correlation on the kinetic energy can be modeled by moving beyond the one-electron distribution function (the electron density) to higher-order electron distributions (k-electron DFT). For example, one can model electron correlation directly using the pair electron density. In this thesis, we investigated two different functionals of the pair density: the Weizsäcker functional and the March-Santamaria functional. The Weizsäcker functional badly fails to describe the accurate kinetic energy due to the N-representability problem. The March-Santamaria functional is exact for a single Slater determinant, but fails to adequately model the effects of electron correlation on the kinetic energy. Finally, we established a relation between the Fisher information and the Weizsäcker kinetic energy functional. This allowed us to propose generalisations of the Weizsäcker kinetic energy density functional. It is hoped that the link between information theory and kinetic energy might provide a new approach to deriving improved kinetic energy functionals. Keywords: kinetic energy functional, density functional theory (DFT), von Weizsäcker functional, March-Santamaria functional, Thomas-Fermi model, density matrix, two-point normalization, pair-density functional theory (PDFT). / Doctor of Science (PhD)
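For reference, the link mentioned in the closing sentences is the standard identity between the von Weizsäcker functional and the Fisher information of the density (atomic units; these are textbook definitions, not spelled out in the abstract itself):

```latex
T_W[\rho] = \frac{1}{8}\int \frac{|\nabla\rho(\mathbf{r})|^{2}}{\rho(\mathbf{r})}\, d\mathbf{r},
\qquad
I[\rho] = \int \frac{|\nabla\rho(\mathbf{r})|^{2}}{\rho(\mathbf{r})}\, d\mathbf{r},
\qquad\text{so that}\quad
T_W[\rho] = \tfrac{1}{8}\, I[\rho].
```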
570
Constant-Flux Inductor with Enclosed-Winding Geometry for Improved Energy Density / Cui, Han. 11 September 2013 (has links)
Passive components such as inductors and capacitors are bulky parts on circuit boards. Researchers in academia, government, and industry have been searching for ways to improve the magnetic energy density and reduce the package size of magnetic parts. The "constant-flux" concept discussed herein is leveraged to achieve high magnetic-energy density by distributing the magnetic flux uniformly, leading to inductor geometries with a volume significantly lower than that of conventional products. A relatively constant flux distribution is advantageous not only from the density standpoint, but also from the thermal standpoint via the reduction of hot spots, and from the reliability standpoint via the suppression of flux crowding.
For toroidal inductors, adding concentric toroidal cells of magnetic material and distributing the windings properly can successfully make the flux density distribution uniform and thus significantly improve the power density.
Compared with a conventional toroidal inductor, the constant-flux inductor introduced herein has an enclosed-winding geometry. The winding layout inside the core is configured to distribute the magnetic flux relatively uniformly throughout the magnetic volume to obtain a higher energy density and smaller package volume than those of a conventional toroidal inductor.
Techniques to shape the core and to distribute the winding turns to form a desirable field profile are described for one class of magnetic geometries with the winding enclosed by the core. For a given set of input parameters, such as the inductor's footprint and thickness, the permeability of the magnetic material, the maximum permissible magnetic flux density for the allowed core loss, and the current rating, the winding geometry can be designed and optimized to achieve the highest time constant, which is the inductance divided by the dc resistance (L/Rdc).
The design procedure for the constant-flux inductor is delineated, together with an example with three winding windows, an inductance of 1.6 µH, and a resistance of 7 mΩ. The constant-flux inductor designed has the same inductance, dc resistance, and footprint area as a commercial counterpart, but half the height.
The uniformity factor α is defined to reflect the level of flux uniformity inside the core volume. For each given magnetic material and volume, an optimal uniformity factor exists that yields the highest time constant. The time constant varies with the footprint area, inductor thickness, relative permeability of the magnetic material, and uniformity factor. The objective of the constant-flux inductor design is therefore to seek the highest possible time constant, so that the constant-flux inductor gives a higher inductance or lower resistance than commercial products of the same volume. The calculated time-constant density of the constant-flux inductor designed is 4008 s/m³, more than two times the 1463 s/m³ of a commercial product.
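As a rough feel for the L/Rdc figure of merit, the sketch below evaluates it for a conventionally wound toroid using textbook closed-form expressions. The dimensions and turn count are invented for illustration (only the relative permeability of 22 comes from the abstract), and the thesis's enclosed-winding geometry would require field analysis rather than these formulas.

```python
import numpy as np

MU0, RHO_CU = 4e-7 * np.pi, 1.68e-8   # H/m, ohm*m (copper resistivity)

def toroid_figure_of_merit(n_turns, mu_r, r_in, r_out, height, wire_d):
    """Time constant L/Rdc and its density for a wound toroid (dimensions in m).
    Illustrative textbook formulas only, not the thesis's design method."""
    # Inductance of a rectangular-cross-section toroid.
    L = MU0 * mu_r * n_turns**2 * height * np.log(r_out / r_in) / (2 * np.pi)
    # One winding turn wraps the core cross-section, plus some wire slack.
    turn_len = 2 * (height + (r_out - r_in)) + 4 * wire_d
    R = RHO_CU * n_turns * turn_len / (np.pi * (wire_d / 2) ** 2)
    tau = L / R                                       # time constant, seconds
    volume = np.pi * (r_out + wire_d) ** 2 * (height + 2 * wire_d)  # bounding cylinder
    return tau, tau / volume                          # s, s/m^3

tau, tau_density = toroid_figure_of_merit(
    n_turns=12, mu_r=22, r_in=2e-3, r_out=5e-3, height=3e-3, wire_d=0.5e-3)
print(f"L/Rdc = {tau:.2e} s, time-constant density = {tau_density:.0f} s/m^3")
```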
To validate the constant-flux concept, various ways of fabricating the core and the winding were explored in the lab, including routing, laser machining of the core, copper etching, and screen printing with silver paste. The most successful results were obtained from the routing process on both the core and the winding. The core from Micrometals has a relative permeability of around 22, and the winding is made of copper sheets 0.5 mm thick. The fabricated inductor prototype shows a significant improvement in energy density: at the same inductance and resistance, the volume of the constant-flux inductor is two times smaller than that of the commercial counterpart.
The constant-flux inductor thus shows a great improvement in energy density and shrinks the total size of the inductor below that of commercial products. Reducing the volume of the magnetic component is beneficial to most power-electronics applications. The study of the constant-flux inductor is currently focused on the dc analysis; the ac analysis is the next step in the research. / Master of Science