311

A national method for predicting environmental pollution

Baverstock, Suzie Jane January 1988 (has links)
No description available.
312

Determining the value of the company Zemědělské družstvo Radiměř

Doležalová, Hana January 2011 (has links)
No description available.
313

The aerodynamic design and optimization of a wing-fuselage junction fillet as part of a multi-disciplinary optimization process during the early aircraft design stages

Hadjiilias, Hippokrates A. January 1996 (has links)
An attempt to minimize interference drag in a wing-fuselage junction by means of inserting a fillet is presented in this thesis. The case of a low-wing commercial transport aircraft at cruise conditions is examined. Due to the highly three-dimensional behaviour of the flow field around the junction, a thin-layer Navier-Stokes code was implemented to estimate the drag forces at the junction. Carefully selected design-variable combinations based on the theory of Design of Experiments constituted the initial group of feasible cases for which the flow solver had to be run. The drag values of these feasible cases were then used to create a second-order response surface which could predict the interference drag with reasonable accuracy given the values of the design variables within the feasible region. A further optimization isolated the combination of design-variable values giving minimum interference drag within the design space. This minimum-drag combination was then evaluated numerically by the flow solver. The prediction of the response surface and the numerical value obtained by the flow solver for the interference drag of the optimal wing-fuselage combination differed by less than five percent. To demonstrate the ability of the method to be used in an interdisciplinary analysis and optimization program, a landing gear design module is included which provides volume constraints on the fillet geometry during the fillet surface definition phase. The Navier-Stokes flow analyses were performed on the Cranfield Cray supercomputer. Each analysis required between eight and twelve CPU hours, and the optimization of the six-variable model described in the thesis required thirty Navier-Stokes runs using the Design of Experiments and response surface methodology.
For comparison, a typical optimization implementing a classical conjugate-directions optimizer with no derivative information available would probably require more than forty iterations. Both the optimization and the flow solver results are discussed, and some recommendations for improving the efficiency of the code and for further applications of the method are given.
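The surrogate-based loop described above (sample the expensive solver at selected design points, fit a second-order response surface, then optimize the cheap surrogate) can be sketched as follows. This is a minimal illustration, not the thesis's actual code: a toy quadratic function stands in for the Navier-Stokes drag evaluation, and random sampling stands in for the Design-of-Experiments point selection.

```python
import numpy as np
from itertools import combinations_with_replacement
from scipy.optimize import minimize

def quadratic_features(x):
    """Second-order polynomial features: [1, x1..xn, x1*x1, x1*x2, ..., xn*xn]."""
    feats = [1.0] + list(x)
    feats += [x[i] * x[j] for i, j in combinations_with_replacement(range(len(x)), 2)]
    return np.array(feats)

def fit_response_surface(X, y):
    """Least-squares fit of a second-order response surface to sampled drag values."""
    A = np.array([quadratic_features(x) for x in X])
    coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coeffs

def surrogate(coeffs, x):
    """Cheap prediction of drag from the fitted response surface."""
    return quadratic_features(x) @ coeffs

# Toy stand-in for one expensive flow-solver drag evaluation (hypothetical).
def drag(x):
    return 1.0 + (x[0] - 0.3) ** 2 + 0.5 * (x[1] + 0.2) ** 2 + 0.1 * x[0] * x[1]

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(30, 2))        # 30 "flow solver runs" in 2 design variables
y = np.array([drag(x) for x in X])
coeffs = fit_response_surface(X, y)

# Minimize the surrogate inside the feasible region instead of the solver itself.
res = minimize(lambda x: surrogate(coeffs, x), x0=np.zeros(2),
               bounds=[(-1, 1), (-1, 1)])
```

A final solver run at `res.x` would then check the surrogate's prediction, mirroring the under-five-percent agreement reported in the abstract.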
314

Physicochemical measurements by gas chromatography

McGill, Robert Andrew January 1988 (has links)
First, the method of gas-liquid chromatography (GLC) has been used to obtain partition coefficients, K, at infinite dilution on polymeric and non-polymeric phases. About 30-40 solutes were studied per stationary phase. Secondly, the method of gas-solid chromatography has been used to obtain adsorption isotherms for a series of adsorbents by the technique of elution by characteristic point (ECP). A single injection of a gas or vapour suffices to obtain the isotherm, and then the limiting Henry's law constant, K[H], for adsorption at low surface coverage. About 20-30 solutes were studied per adsorbent. Experiments were carried out at several levels of relative humidity (RH): 0%, 31% and 53%. The solute compounds used were chosen so as to have a wide range of properties such as polarity (pi*[2]), hydrogen-bond acidity (alpha[H]2), and hydrogen-bond basicity (beta[H]2). The results, as log partition coefficients or -log Henry's constants, were analysed by multiple linear regression using equations such as: -logK[H] or logK = SPo + s.pi*[2] + a.alpha[H]2 + b.beta[H]2 + l.logL[16], where L[16] is the solute Ostwald absorption coefficient on n-hexadecane. In this way, the selectivity of the liquid polymeric phase or solid adsorbent towards classes of compound was investigated, and equations for the prediction of further values of logK or logK[H] were formulated. In parallel with the measurement of partition coefficients on liquid polymeric phases by GLC in this work, partition coefficients for the polymers have been determined using surface acoustic wave (SAW) devices by coworkers at the Naval Research Laboratory, Washington. The results for a series of 8-9 solutes in six polymeric phases show that partition coefficients and patterns of response predicted through GLC experiments are the same as those found experimentally using coated SAW devices.
Hence GLC can be used to evaluate possible coating materials and, by the technique of multiple linear regression analysis, to predict SAW responses for a multitude of vapours.
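The regression in the abstract is an ordinary multiple linear fit of log K against the solute descriptors. A minimal sketch with numpy follows; the descriptor values and measured log K numbers here are invented for illustration, and only the equation's form (intercept plus s, a, b, l coefficients) is taken from the abstract.

```python
import numpy as np

# Hypothetical solute descriptors: columns are [pi*, alpha_H, beta_H, logL16].
descriptors = np.array([
    [0.52, 0.00, 0.10, 3.68],
    [0.40, 0.33, 0.56, 2.60],
    [0.75, 0.60, 0.45, 1.48],
    [0.10, 0.00, 0.07, 3.11],
    [0.90, 0.26, 0.41, 3.90],
    [0.28, 0.00, 0.45, 2.89],
])
# Hypothetical measured log K values for one stationary phase.
logK = np.array([2.91, 3.02, 2.47, 2.15, 4.25, 2.95])

# Design matrix [1, pi*, alpha, beta, logL16]; solve by least squares.
A = np.column_stack([np.ones(len(logK)), descriptors])
coefs, *_ = np.linalg.lstsq(A, logK, rcond=None)
c, s, a, b, l = coefs    # intercept and the s, a, b, l coefficients

# Predict log K for a new solute from its descriptors (the SAW-screening use).
new = np.array([1.0, 0.45, 0.10, 0.30, 3.00])
logK_pred = new @ coefs
```

The signs and magnitudes of s, a, b and l then characterize the phase's selectivity towards polar, acidic and basic solutes, which is how the abstract's selectivity analysis proceeds.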
315

Variable selection in high dimensional semi-varying coefficient models

Chen, Chi 06 September 2013 (has links)
With the development of computing and sampling technologies, high dimensionality has become an important characteristic of commonly used scientific data, such as data from bioinformatics, information engineering, and the social sciences. The varying coefficient model is a flexible and powerful statistical model for exploring dynamic patterns in many scientific areas. It is a natural extension of classical parametric models with good interpretability, and is becoming increasingly popular in data analysis. The main objective of this thesis is to apply the varying coefficient model to analyze high dimensional data, and to investigate the properties of regularization methods for high-dimensional varying coefficient models. We first discuss how to apply local polynomial smoothing and the smoothly clipped absolute deviation (SCAD) penalized method to estimate varying coefficient models when the dimension of the model diverges with the sample size. Based on the nonconcave penalized method and local polynomial smoothing, we suggest a regularization method to select significant variables from the model and estimate the corresponding coefficient functions simultaneously. Importantly, our proposed method can also identify constant coefficients at the same time. We investigate the asymptotic properties of our proposed method and show that it has the so-called “oracle property.” We apply the nonparametric independence screening (NIS) method to varying coefficient models with ultra-high-dimensional data. Based on marginal varying coefficient model estimation, we establish the sure screening property under some regularity conditions for our proposed screening method. Combined with our proposed regularization method, we can systematically deal with high-dimensional or ultra-high-dimensional data using varying coefficient models. The nonconcave penalized method is a very effective variable selection method.
However, maximizing such a penalized likelihood function is computationally challenging, because the objective function is nondifferentiable and nonconcave. The local linear approximation (LLA) and local quadratic approximation (LQA) are two popular algorithms for dealing with such optimization problems. In this thesis, we revisit these two algorithms. We investigate the convergence rate of LLA and show that the rate is linear. We also study the statistical properties of the one-step estimate based on LLA under a generalized statistical model with a diverging number of dimensions. We suggest a modified version of LQA to overcome its drawback under high dimensional models. Our proposed method avoids calculating the inverse of the Hessian matrix in the modified Newton-Raphson algorithm based on LQA. Our proposed methods are investigated by numerical studies and in a real case study in Chapter 5.
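The SCAD penalty at the heart of this regularization has a simple closed form, and its derivative is exactly the weight each LLA step uses. The sketch below follows Fan and Li's usual parameterization with a = 3.7; it illustrates the penalty itself, not the thesis's full varying-coefficient estimator.

```python
import numpy as np

def scad_penalty(t, lam, a=3.7):
    """SCAD penalty of Fan and Li (2001), evaluated elementwise on |t|.
    Linear (lasso-like) near zero, quadratic transition, then flat, so
    large coefficients are not shrunk (the source of the oracle property)."""
    t = np.abs(np.asarray(t, dtype=float))
    return np.select(
        [t <= lam, t <= a * lam],
        [lam * t,
         (2 * a * lam * t - t ** 2 - lam ** 2) / (2 * (a - 1))],
        default=lam ** 2 * (a + 1) / 2,
    )

def scad_derivative(t, lam, a=3.7):
    """p'_lambda(|t|): zero beyond a*lam, so large coefficients are unpenalized.
    In an LLA step this becomes the per-coefficient weight of a weighted lasso."""
    t = np.abs(np.asarray(t, dtype=float))
    return lam * ((t <= lam)
                  + np.maximum(a * lam - t, 0) / ((a - 1) * lam) * (t > lam))
```

Each LLA iteration replaces the nonconcave penalty p(|beta_j|) by its tangent line p'(|beta_j^old|) * |beta_j|, so the nondifferentiable, nonconcave problem reduces to a sequence of weighted-lasso problems, which is what makes the linear convergence rate mentioned above meaningful in practice.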
316

The use of low temperature infra red spectroscopy in the study of ionic solvation

Strauss, Imants M. January 1981 (has links)
This thesis is concerned with the theoretical and experimental aspects of low temperature infra red spectroscopic studies of ionic solvation. Isotopically dilute methanolic and aqueous solutions were investigated in the fundamental infra red region, particularly in the O-H stretching region of the spectrum. Systematic room temperature investigations were undertaken for aqueous and methanolic electrolyte solutions as a function of the salt concentrations. These results revealed certain trends regarding solvent-solute interactions which the low temperature experiments on the same solutions confirmed, so that cation and anion solvation models could be put forward. Particular attention was given to the study of methanolic and aqueous polyatomic ion solutions to provide further evidence as to whether these anions cause the formation of free or weakly bonded O-H groups. Infra red studies of the tetrahydroborate anion in various pure solvents and binary mixtures suggested an interaction between BH4- and water protons which had the spectroscopic characteristics of hydrogen bonding. Finally, vibrational studies of methanolic tetraalkylammonium halide solutions in inert and bulk solvent solutions produced a range of anion solvates where both primary and secondary solvation could be observed.
317

Investigation of unsteady separated flow and heat transfer using direct and large eddy simulations

Suksangpanomrung, Anotai 19 January 2018 (has links)
This dissertation presents a numerical analysis of the separated flow and convective heat transfer around a bluff rectangular plate. This geometrically simple “prototype” configuration exhibits all the important features of complex separated and reattaching flow and has the advantage of well-defined upstream conditions. The main objective of this work is the investigation of three-dimensional, high Reynolds number, unsteady separated flow using the large eddy simulation technique. However, the two-dimensional and three-dimensional low and moderate Reynolds number simulations leading up to this are also of interest. A staggered-grid, finite volume method is used in conjunction with a third-order Runge-Kutta temporal algorithm. The linear system for pressure is solved by, depending on the case, either a direct method or an efficient conjugate gradient method with preconditioning. Two spatial discretizations are used, QUICK and CDS. In order to avoid the numerical diffusion of QUICK and the dispersive effect of CDS, a mixed discretization is also introduced at high Reynolds number (Red = 50,000). The two-dimensional steady and unsteady simulations are presented first. The predicted flow characteristics are in agreement with those reported in previous numerical studies. The two-dimensional unsteady simulations (Red = 1,000) provide good insight into the overall dynamic features of the separation process, the onset of instabilities, and the pseudo-periodic pattern of vortex formation, pairing and shedding. The realism of the simulation is, however, constrained by the artificially high coherence of the flow imposed by two-dimensionality. The three-dimensional simulations provide a much improved representation of the flow. Three-dimensional instabilities are found to appear soon after the onset of the shear layer roll-up, and result in the rapid break-up of spanwise vortices.
Convective heat transfer simulations highlighting the important role of large-scale structures in enhancing turbulent transport are also presented. At high Reynolds number, Red = 50,000, simulations are performed with three subgrid-scale models. The selective structure function model, which allows improved localization, yields excellent agreement of the mean flow statistics with available experimental data. The dynamics of the flow are investigated using wavelet transform analysis and coherent structure identification. Characteristic frequencies related to shear layer instability, flapping and vortex shedding are identified, consistent with experimental observations. The flow in the reattachment region is highly intermittent and characterized by a complex quasi-cyclic growth and bursting of the separation bubble, and horseshoe structures are identified in the recovery region of the flow. / Graduate
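The identification of characteristic frequencies such as vortex shedding can be illustrated with a plain spectral analysis of a probe signal. This is a toy sketch, not the thesis's wavelet-based analysis: the velocity trace, the shedding frequency, and the length and velocity scales d and U_inf are all invented for the example.

```python
import numpy as np

# Synthetic probe signal: a stand-in for a wake velocity trace, with a
# shedding component at 12 Hz plus broadband noise.
fs = 1000.0                         # sampling frequency [Hz]
t = np.arange(0.0, 4.0, 1.0 / fs)
rng = np.random.default_rng(1)
u = np.sin(2 * np.pi * 12.0 * t) + 0.3 * rng.standard_normal(t.size)

# One-sided spectrum of the fluctuating part; the shedding frequency
# appears as the dominant peak.
U = np.fft.rfft(u - u.mean())
freqs = np.fft.rfftfreq(u.size, d=1.0 / fs)
f_peak = freqs[np.argmax(np.abs(U))]

# Non-dimensionalize as a Strouhal number St = f * d / U_inf
# (plate thickness d and free-stream velocity U_inf are assumed values).
d, U_inf = 0.01, 1.0
St = f_peak * d / U_inf
```

A wavelet transform, as used in the dissertation, would additionally localize these frequencies in time, which matters for intermittent events like the bursting of the separation bubble.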
318

Sensory methods used in meat lipid oxidation studies

Noble, Ronald January 1900 (has links)
Master of Science / Food Science Institute / Kadri Koppel / Oxidation of meat decreases consumer acceptance and reduces market value, making it an important problem for the meat industry. Odor and flavor of meat are significantly affected by lipid oxidation, and researchers continue to explore new ways to control meat oxidation. Natural antioxidants, irradiation and oxygen treatments are major areas of research in meat lipid oxidation. In recent studies researchers have been exploring ways to extend the shelf life of meat and in many cases rely on sensory results. This report deals with sensory methods used to measure changes associated with treatments and outlines how researchers are using these methods.
319

Probabilistic modelling of genomic trajectories

Campbell, Kieran January 2017 (has links)
The recent advancement of whole-transcriptome gene expression quantification technology - particularly at the single-cell level - has created a wealth of biological data. An increasingly popular unsupervised analysis is to find one-dimensional manifolds, or trajectories, through such data that track the development of some biological process. Such methods may be necessary due to the lack of explicit time series measurements or due to asynchronicity of the biological process at a given time. This thesis aims to recast trajectory inference from high-dimensional "omics" data as a statistical latent variable problem. We begin by examining sources of uncertainty in current approaches and examine the consequences of propagating such uncertainty to downstream analyses. We also introduce a model of switch-like differentiation along trajectories. Next, we consider inferring such trajectories through parametric nonlinear factor analysis models and demonstrate that incorporating information about gene behaviour as informative Bayesian priors improves inference. We then consider the case of bifurcations in data and demonstrate the extent to which they may be modelled using a hierarchical mixture of factor analysers. Finally, we propose a novel type of latent variable model that performs inference of such trajectories in the presence of heterogeneous genetic and environmental backgrounds. We apply this to both single-cell and population-level cancer datasets and propose a nonparametric extension similar to Gaussian Process Latent Variable Models.
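The basic idea of a one-dimensional trajectory through expression data can be illustrated with the simplest possible pseudotime: project cells onto the leading principal axis and rank them. This is only a toy baseline under invented synthetic data; the thesis's contribution is precisely to replace such point estimates with probabilistic latent variable models that carry uncertainty.

```python
import numpy as np

def pseudotime_pca(X):
    """Minimal pseudotime sketch: project cells (rows of X) onto the first
    principal component and map their ranks to [0, 1]. Probabilistic
    trajectory models replace this projection with latent-variable inference."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)  # rows of Vt = principal axes
    proj = Xc @ Vt[0]                                  # coordinate along the 1st axis
    order = np.argsort(proj)
    tau = np.empty(len(proj))
    tau[order] = np.linspace(0.0, 1.0, len(proj))      # rank-based pseudotimes
    return tau

# Synthetic "cells": 5 genes varying smoothly along a hidden latent time.
rng = np.random.default_rng(2)
t_true = rng.uniform(0, 1, 200)
loadings = rng.standard_normal(5)
X = np.outer(t_true, loadings) + 0.05 * rng.standard_normal((200, 5))
tau = pseudotime_pca(X)
```

Note the inherent sign ambiguity: the recovered ordering may run backwards relative to the true process, one of the identifiability issues probabilistic formulations must handle explicitly.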
320

A numerical method based on Runge-Kutta and Gauss-Legendre integration for solving initial value problems in ordinary differential equations

Prentice, Justin Steven Calder 11 September 2012 (has links)
M.Sc. / A class of numerical methods for solving nonstiff initial value problems in ordinary differential equations has been developed. These methods, designated RKrGLn, are based on a Runge-Kutta method of order r (RKr) and Gauss-Legendre integration over n + 1 nodes. The interval of integration for the initial value problem is subdivided into an integer number of subintervals. On each of these, n + 1 nodes are defined in accordance with the zeros of the Legendre polynomial of degree n. The Runge-Kutta method is used to find an approximate solution at each of these nodes; Gauss-Legendre integration is used to find the solution at the endpoint of the subinterval. The process then carries over to the next subinterval. We find that for a suitable choice of n, the order of the local error of the Runge-Kutta method (r + 1) is preserved in the global error of RKrGLn. However, a poor choice of n can actually limit the order of RKrGLn, irrespective of the choice of r. What is more, the inclusion of Gauss-Legendre integration slightly reduces the number of arithmetical operations required to find a solution, in comparison with RKr at the same number of nodes. These two factors combine to ensure that RKrGLn is considerably more efficient than RKr, particularly when very accurate solutions are sought. Attempts to control the error in RKrGLn have been made. The local error has been successfully controlled using a variable stepsize strategy, similar to that generally used in RK methods. The difference lies in that it is the size of each subinterval that is controlled in RKrGLn, rather than each individual stepsize. Nevertheless, local error has been successfully controlled for relative tolerances ranging from 10^-4 to 10^-10. We have also developed algorithms for estimating and controlling the global error.
These algorithms require that a complete solution be obtained for a specified distribution of nodes, after which the global error is estimated and then, if necessary, a new node distribution is determined and another solution obtained. The algorithms are based on Richardson extrapolation and the use of low-order and high-order pairs. The algorithms have successfully achieved desired relative global errors as small as 10^-10. We have briefly studied how RKrGLn may be used to solve stiff systems. We have determined the intervals of stability for several RKrGLn methods on the real line, and used this to develop an algorithm to solve a stiff problem. The algorithm is based on the idea of stepsize/subinterval adjustment, and has been used to successfully solve the van der Pol system. Lagrange interpolation on each subinterval has been implemented to obtain a piecewise continuous polynomial approximation to the numerical solution, with error of the same order, which can be used to find the solution at arbitrary nodes.
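The per-subinterval scheme described above can be sketched roughly as follows. This is a minimal illustration under stated simplifications, not the thesis's actual RKrGLn implementation: here r = 4 (classical RK4), the nodes are the n interior Gauss-Legendre points of each subinterval, and the endpoint value comes from applying the quadrature rule to f along the RK4 solution.

```python
import numpy as np

def rk4_step(f, t, y, h):
    """One classical fourth-order Runge-Kutta step for y' = f(t, y)."""
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

def rk4gl_subinterval(f, a, b, ya, n=4):
    """One subinterval of the sketched RK4+Gauss-Legendre scheme: march with
    RK4 through the Gauss-Legendre nodes of [a, b], then obtain the endpoint
    from y(b) = y(a) + ((b-a)/2) * sum_i w_i f(t_i, y_i)."""
    x, w = np.polynomial.legendre.leggauss(n)   # nodes/weights on [-1, 1]
    t_nodes = (b - a) / 2 * x + (a + b) / 2     # shifted to [a, b]
    y, t, total = ya, a, 0.0
    for ti, wi in zip(t_nodes, w):
        y = rk4_step(f, t, y, ti - t)           # RK4 from previous node to ti
        t = ti
        total += wi * f(ti, y)                  # accumulate quadrature sum
    return ya + (b - a) / 2 * total

# Check on y' = y, y(0) = 1, whose exact solution is exp(t): integrate
# over [0, 1] split into 4 subintervals.
f = lambda t, y: y
y, edges = 1.0, np.linspace(0.0, 1.0, 5)
for a, b in zip(edges[:-1], edges[1:]):
    y = rk4gl_subinterval(f, a, b, y)
```

The quadrature replaces the final RK step of each subinterval, which is the source of the operation-count saving the abstract mentions; the node-count trade-off (how n interacts with the RK order r) is exactly what the thesis analyses.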
