Nanofluid Thermal Conductivity - a thermo-mechanical, chemical structure and computational approach
Yiannou, Angelos (January 2015)
Multiple papers have been published that attempt to predict the thermal conductivity or thermal diffusivity of graphite “nanofluids” [1–6]. Each of these papers employs empirical methods (with no consideration of quantum mechanical principles or a structural reference) in an attempt to understand and predict the heat transfer characteristics of a nanofluid. However, the results of these papers vary considerably. The primary reason for this may be the inability to construct a representative material model (based on the chemical structure) that can accurately predict the thermal enhancement properties arising from the intercalated adsorption of a fluid with a noticeable heat capacity [3].
This project has strived to simulate the interaction of (nano-scale) graphite particles “dispersed” in water (at the structural level of effective surface “wetting”). The ultimate purpose is to enhance the heat conduction capacity. The strategy was to initially focus on the structural properties of the graphite powder, followed by incremental exposure to water molecules to achieve a representative model. The procedure followed includes these experimental steps:
a) Molecular-resolution porosimetry (i.e. BET) experiments to determine the graphene “platelet” surface area, for correlation with the minimum crystallite size (where a single graphite crystal is made up of multiple unit cells) of the graphite powder samples.
b) Powder X-ray diffraction (PXRD) analyses of the graphite powder samples, each supplied by a different manufacturer, in order to determine their respective crystallographic structures.
c) Infrared (IR) and Raman vibrational spectra characterisation of all of the graphite powder samples for further structure confirmation.
d) Thermo-gravimetric analysis (TGA) of graphite powder and water mixture samples, to determine the point at which the bulk water has separated and evaporated away from the graphite powder/water mixture, leaving a minimum layer of water adsorbed on the graphite surface and in the inter-/intra-particle graphite spaces.
e) Differential scanning calorimetry (DSC) of the “dry” and “surface-wetted” graphite samples to determine their specific heat capacities.
f) Laser flash analysis (LFA) of the “dry” and “surface-wetted” graphite samples to determine their thermal diffusivity and thermal conductivity (see the sketch after this list).
g) Computer-simulated analysis of the graphite powder exposed to water by means of computational modelling, to elucidate the various conformational approaches of water onto the graphite surface and the associated thermodynamic behaviour of water molecules ad/absorbed at the graphite surface.

Data from the TGA measurements allowed for the determination of the appropriate amount of water required not only to prepare graphite “surface-wetted” samples experimentally, but also to determine the effective amount of absorbed water to be considered in the computational models. For experimental verification, both dry and wet graphite samples should then be used in a laser flash analysis (LFA), in order to elucidate the effect the interfacial layer of water has on the thermal properties of graphite.
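For context on steps e) and f): LFA measures the thermal diffusivity directly, and the thermal conductivity then follows from the standard relation k = α·ρ·c_p using the DSC heat capacities. A minimal sketch of this post-processing step, with hypothetical placeholder values rather than data from this study:

```python
# Thermal conductivity from LFA diffusivity and DSC heat capacity:
# k = alpha * rho * c_p (standard LFA post-processing relation).

def thermal_conductivity(alpha_mm2_s: float, rho_g_cm3: float, cp_J_gK: float) -> float:
    """Return k in W/(m*K) from diffusivity [mm^2/s], density [g/cm^3],
    and specific heat capacity [J/(g*K)]."""
    # Unit bookkeeping: mm^2/s * g/cm^3 * J/(g*K)
    #   = 1e-6 m^2/s * 1e3 kg/m^3 * 1e3 J/(kg*K) = W/(m*K)
    return alpha_mm2_s * 1e-6 * rho_g_cm3 * 1e3 * cp_J_gK * 1e3

# Hypothetical placeholder values (not measured data from this study):
k = thermal_conductivity(alpha_mm2_s=80.0, rho_g_cm3=2.26, cp_J_gK=0.71)
print(f"k = {k:.1f} W/(m*K)")
```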
A computerised model of a single graphite crystal exposed to water was created using the MedeA (v. 2.14) modelling software. The conformational behaviour and energy states of a cluster of water molecules on the graphite surface were then analysed by using the VASP 5.3 software (based on a periodic solid-state model approach with boundary conditions), to determine the energetics, atomic structure and graphite surface “wetting” characteristics, at the atomistic level. The results of the computerised model were correlated to the physical experiments and to previously published figures.
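For context on the VASP energetics, the water-graphite adsorption energy is conventionally obtained from three periodic total-energy calculations as E_ads = E(slab + nH2O) − E(slab) − n·E(H2O), with negative values indicating favourable adsorption (“wetting”). A minimal sketch with hypothetical energies, not outputs of this study:

```python
# Conventional adsorption-energy bookkeeping for periodic DFT (e.g. VASP) runs:
#   E_ads = E(slab + n*H2O) - E(slab) - n * E(H2O)
# A negative E_ads per molecule indicates favourable adsorption.

def adsorption_energy(e_total: float, e_slab: float, e_water: float, n_water: int) -> float:
    """Per-molecule adsorption energy in eV (inputs are total energies in eV)."""
    return (e_total - e_slab - n_water * e_water) / n_water

# Hypothetical total energies (eV), for illustration only:
print(adsorption_energy(e_total=-311.45, e_slab=-297.10, e_water=-14.22, n_water=1))
```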
Only once a clear understanding of the way in which water molecules interact with graphite surfaces has been obtained can further investigation resolve the effect that exposing larger graphite surfaces to polar solvents (such as water and lubricants) will have on the heat conductance of such ensembles. This scope of further work will constitute a PhD study.

Dissertation (MEng), University of Pretoria, 2015. Mechanical and Aeronautical Engineering.
Optimizing neural network structures: faster speed, smaller size, less tuning
Li, Zhe (01 January 2018)
Deep neural networks have achieved tremendous success in many domains, e.g., computer vision [Alexnet12, vggnet15, fastrcnn15], speech recognition [hinton2012deep, dahl2012context], natural language processing [dahl2012context, collobert2011natural], and games [silver2017mastering, silver2016mastering]. However, many challenges remain in the deep learning community, such as how to speed up the training of large deep neural networks, how to compress large neural networks for mobile/embedded devices without performance loss, how to automatically design the optimal network structure for a given task, and how to further design optimal networks with improved performance and a given model size at reduced computation cost.
To speed up the training of large neural networks, we propose to use multinomial sampling for dropout, i.e., sampling features or neurons according to a multinomial distribution with different probabilities for different features/neurons. To derive the optimal dropout probabilities, we analyze shallow learning with multinomial dropout and establish the risk bound for stochastic optimization. By minimizing a sampling-dependent factor in the risk bound, we obtain a distribution-dependent dropout with sampling probabilities dependent on the second-order statistics of the data distribution. To tackle the issue of the evolving distribution of neurons in deep learning, we propose an efficient adaptive dropout (named evolutional dropout) that computes the sampling probabilities on-the-fly from a mini-batch of examples.
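As a rough illustration (a minimal sketch, not the thesis's exact formulation: Bernoulli sampling stands in for the multinomial scheme, and the function name and parameters are hypothetical), sampling probabilities proportional to the square root of each feature's second moment can be computed on-the-fly from a mini-batch, with the kept activations rescaled so the layer output stays unbiased in expectation:

```python
import numpy as np

def evolutional_dropout(X: np.ndarray, keep_fraction: float = 0.5, rng=None) -> np.ndarray:
    """Data-dependent dropout on a mini-batch X of shape (batch, features).

    Sampling probabilities are proportional to sqrt(E[x_i^2]), the
    second-order statistics of each feature, estimated from the mini-batch.
    """
    rng = rng or np.random.default_rng()
    second_moment = np.mean(X**2, axis=0)           # E[x_i^2] per feature
    p = np.sqrt(second_moment)
    p = p / p.sum() * (keep_fraction * X.shape[1])  # expected number kept
    p = np.clip(p, 1e-8, 1.0)                       # keep probabilities valid
    mask = rng.random(X.shape[1]) < p               # sample features to keep
    # Rescale by 1/p so that E[output] equals the un-dropped activation.
    return X * mask / p

X = np.random.randn(32, 128)
out = evolutional_dropout(X, keep_fraction=0.5)
```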
To compress large neural network structures, we propose a simple yet powerful method for compressing the size of deep Convolutional Neural Networks (CNNs) based on parameter binarization. The striking difference from most previous work on parameter binarization/quantization lies in the different treatment of $1\times 1$ and $k\times k$ convolutions ($k>1$): we only binarize $k\times k$ convolutions into binary patterns. By doing this, we show that previous deep CNNs such as GoogLeNet and Inception-type Nets can be compressed dramatically with a marginal drop in performance. Second, in light of the different functionalities of $1\times 1$ (data projection/transformation) and $k\times k$ convolutions (pattern extraction), we propose a new block structure, codenamed the pattern residual block, that adds transformed feature maps generated by $1\times 1$ convolutions to the pattern feature maps generated by $k\times k$ convolutions, based on which we design a small network with $\sim 1$ million parameters. Combined with our parameter binarization, we achieve better performance on ImageNet than similarly sized networks, including the recently released Google MobileNets.
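As a hedged sketch of the general technique, assuming an XNOR-Net-style scheme (a sign pattern plus one full-precision scaling factor) rather than the thesis's exact formulation, only $k\times k$ kernels are binarized while $1\times 1$ convolutions stay full precision:

```python
import numpy as np

def binarize_conv_weights(W: np.ndarray):
    """Binarize a conv weight tensor of shape (out_ch, in_ch, k, k).

    Only k x k kernels (k > 1) are binarized into {-1, +1} patterns;
    1 x 1 convolutions (data projection/transformation) are kept as-is.
    """
    k = W.shape[-1]
    if k == 1:
        return W, None              # leave 1x1 convolutions full precision
    alpha = np.mean(np.abs(W))      # scale minimising ||W - alpha*sign(W)||^2
    B = np.sign(W)                  # binary pattern in {-1, +1}
    return B, alpha

# At inference time the layer uses alpha * B in place of W:
W = np.random.randn(64, 32, 3, 3)
B, alpha = binarize_conv_weights(W)
W_approx = alpha * B
```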
To automatically design neural networks, we study how to design a genetic programming approach for optimizing the structure of a CNN for a given task under limited computational resources, yet without imposing strong restrictions on the search space. To reduce the computational costs, we propose two general strategies that are observed to be helpful: (i) aggressively selecting the strongest individuals for survival and reproduction, and killing weaker individuals at a very early age; (ii) increasing the mutation frequency to encourage diversity and faster evolution. The combined strategy, with additional optimization techniques, allows us to explore a large search space at affordable computational cost.
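A schematic of the two strategies might look like the sketch below; the encoding, the fitness function, and all hyperparameter values are hypothetical placeholders rather than the thesis's actual settings (mutate is assumed to return a new individual):

```python
import random

def evolve(init_population, fitness, mutate, generations=50,
           survivor_fraction=0.25, mutations_per_child=3):
    """Aggressive GP loop: keep only the strongest fraction each generation
    (killing weaker individuals at a very early age) and mutate offspring
    several times to encourage diversity and faster evolution."""
    population = list(init_population)
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        survivors = population[:max(1, int(len(population) * survivor_fraction))]
        children = []
        while len(children) + len(survivors) < len(population):
            child = random.choice(survivors)
            for _ in range(mutations_per_child):  # increased mutation frequency
                child = mutate(child)
            children.append(child)
        population = survivors + children
    return max(population, key=fitness)
```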
To further design optimal networks with improved performance and a given model size at reduced computation cost, we propose an ecologically inspired genetic approach for neural network structure search that includes two types of succession, primary and secondary, as well as accelerated extinction. Specifically, we first use primary succession to rapidly evolve a community of poorly initialized neural network structures into a more diverse community, followed by a secondary succession stage for fine-grained searching based on the networks from the primary succession. Accelerated extinction is applied in both stages to reduce computational cost. In addition, we introduce gene duplication to further utilize the novel blocks of layers that appear in the discovered network structure.
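Reusing the hypothetical evolve() helper from the previous sketch, the two-stage succession could be wired roughly as follows (again, all names and parameter values are illustrative assumptions, not the thesis's method):

```python
# Two-stage succession (sketch). Primary succession: a high mutation rate
# rapidly diversifies a poorly initialized community; secondary succession:
# fine-grained search seeded from the primary stage's best network.
# Accelerated extinction = a small survivor_fraction in both stages.

def succession_search(init_population, fitness, mutate):
    primary_best = evolve(init_population, fitness, mutate,
                          generations=20, survivor_fraction=0.1,  # accelerated extinction
                          mutations_per_child=5)                  # coarse, diverse moves
    # Seed the secondary stage with perturbed copies of the primary-stage best.
    seeds = [mutate(primary_best) for _ in range(16)] + [primary_best]
    return evolve(seeds, fitness, mutate,
                  generations=30, survivor_fraction=0.1,
                  mutations_per_child=1)                          # fine-grained search
```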
Determination of Fracture Mechanical Characteristics From Sub-Size Specimens (Určování lomově-mechanických charakteristik z podrozměrných zkušebních těles)
Stratil, Luděk (Unknown Date)
Fracture toughness testing standards prescribe size requirements for test specimens. When only a limited amount of test material is available, miniature test specimens offer one possibility for evaluating fracture toughness. Because the loaded volumes at the crack tip in these specimens are small, a loss of constraint occurs, affecting the measured fracture toughness values; the size requirements for a valid fracture toughness determination are then not fulfilled. Such specimens can also be at the limits of the load range of the test devices, and their small dimensions complicate handling. Beyond the methodology of their preparation and the measurement of deformations, an important task is the interpretation of the measured fracture toughness values and their possible correction to standard test specimens. Moreover, in the upper-shelf region of fracture toughness, the quantification and interpretation of size effects is still not sufficiently resolved. This thesis is, by its aims, a combined experimental and computational study focused on evaluating the size effect on fracture toughness in the upper-shelf region. The size effect was quantified by testing miniature and large specimen sizes in order to determine J-R curves. Two miniature test specimen geometries were used: the three-point bend specimen and the CT specimen. The experimental materials were advanced steels developed for applications in the nuclear and power industries: Eurofer97 steel and the ODS steel MA956. Finite element analyses of the tests, together with a micromechanical model of ductile fracture, were carried out to evaluate the stress-strain fields at the crack tip in the tested Eurofer97 specimens. By comparing the experimental results and numerical simulations of the J-R curves, mutual dependencies between specimen geometry and element size at the crack tip were derived. On the basis of the acquired relationships, a methodology for predicting the J-R curve of a standard specimen size from a limited amount of test material was proposed. The main contribution of the thesis is the description of the effect of a material's fracture toughness level on its resistance to ductile crack propagation in miniature specimens. For a material in which significant crack growth occurs after the limit values of the J-integral are exceeded (Eurofer97), the loss of constraint is considerable and greatly decreases the resistance to tearing; miniature specimens then show significantly lower J-R curves than standard-size specimens. This effect is the opposite of the behaviour of miniature specimens in the transition region. For a material with low toughness, in which significant crack growth occurs within the region of J-integral validity (ODS MA956), the effect of constraint loss is small, without a large impact on the resistance to tearing; in that case miniature specimens exhibit J-R curves comparable to those of larger specimens. A further important contribution is the proposed methodology for predicting the J-R curve from a small amount of test material using micromechanical modelling.
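For context, a J-R (resistance) curve plots the J-integral against ductile crack extension Δa and is commonly represented by a power-law fit J = C·Δa^m (as in, e.g., ASTM E1820). A minimal sketch of such a fit, with purely hypothetical data points rather than measurements from this thesis:

```python
import numpy as np
from scipy.optimize import curve_fit

# J-R curve: J-integral vs. ductile crack extension da, fitted with the
# common power law J = C * da**m (see, e.g., ASTM E1820).
def jr_power_law(da, C, m):
    return C * da**m

# Hypothetical (da [mm], J [kJ/m^2]) pairs, for illustration only:
da = np.array([0.1, 0.2, 0.4, 0.8, 1.2, 1.6])
J = np.array([180.0, 250.0, 340.0, 470.0, 560.0, 640.0])

(C, m), _ = curve_fit(jr_power_law, da, J, p0=(400.0, 0.5))
print(f"J = {C:.0f} * da^{m:.2f}")
```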