21

Service Quality in the Postal Services in Turkey: A Canonical Approach

Yavas, Ugur 17 November 2000 (has links)
This article reports the results and managerial implications of a Turkish study that investigated the relationships between service quality, background characteristics, customer satisfaction, and selected behavioral outcomes.
22

Predicting Motion of Engine-Ingested Particles Using Deep Neural Networks

Bowman, Travis Lynn 01 August 2022 (has links)
The ultimate goal of this work is to facilitate the design of gas turbine engine particle separators by reducing the computational expense of accurately simulating the fluid flow and particle motion inside the separator. It has been well documented that particle ingestion has many detrimental impacts on gas turbine engines. The consequences of ice particle ingestion range from surface-wear abrasion to engine power loss. It is known that sufficiently small particles, characterized by small particle response times (τp), closely follow the fluid trajectory, whereas large particles deviate from the streamlines. Inertial particle separators are devices designed to remove particles from the engine intake flow, which both elongates the lifespan and promotes safer operation of aviation gas turbine engines. Complex flows, such as flow through a particle separator, naturally have rotation and strain present throughout the flow field. Rather than manually deriving how the particle acceleration varies from the fluid acceleration, this work derives the relationship implicitly using machine learning (ML). This study attempts to understand whether the motion of particles within rotational and strained canonical flows can be accurately predicted using supervised ML. It suggests that transforming the ML training data into fluid streamline coordinates can improve model training. ML models were developed for predicting particle acceleration in laminar, fully rotational/irrotational flows and in combined laminar flows with rotation and strain. Lastly, the ML model is applied to particle data extracted from a Computational Fluid Dynamics (CFD) study of particle-laden flow around a louver geometry. However, the model trained with particle data from the combined canonical flows fails to accurately predict particle accelerations in the CFD flow field.
/ Master of Science / Aviation gas turbine engine particle ingestion is known to reduce engine lifespans and, in the worst case, even pose a threat to safe operation. Particles being ingested into an engine can be modeled using multiphase flow techniques. Devices called inertial particle separators are designed to remove particles from the flow into the engine. One challenge with designing such a separator is figuring out how to efficiently expel the small particles from the flow while not unnecessarily increasing pressure loss with excessive twists and turns in the geometry. Designers usually have to develop such geometries using multiphase flow computational fluid dynamics (CFD) that solves the fluid and particle dynamics. The abundance of data associated with CFD, and especially with multiphase flows, makes it an ideal application for machine learning (ML). Because such multiphase simulations are very computationally expensive, it is desirable to develop "cheaper" methods. This is the long-term goal of this work: we want to create ML surrogates that decrease the computational cost of simulating the particle and fluid flow in particle separator geometries so that designs can be iterated more quickly. In this work we introduce how artificial neural networks (ANNs), a tool used in ML, can be used to predict particle acceleration in fluid flow. The ANNs are shown to learn the acceleration predictions with acceptable accuracy on the training data generated from canonical flow cases. However, the ML model struggles to generalize to actual CFD simulations.
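The particle response time mentioned in this abstract can be made concrete with the simplest drag law. The sketch below is a generic Stokes-drag relaxation model, not code from the thesis (the function name is illustrative); it shows why small-τp particles act as tracers while large-τp particles lag the flow and deviate from streamlines:

```python
import numpy as np

def particle_acceleration(u_fluid, u_particle, tau_p):
    """Stokes-drag relaxation: the particle accelerates toward the
    local fluid velocity on the response time scale tau_p."""
    return (u_fluid - u_particle) / tau_p

u_f = np.array([1.0, 0.0])   # local fluid velocity
u_p = np.array([0.0, 0.0])   # particle currently at rest

# Small tau_p: large restoring acceleration, so the particle
# behaves like a tracer and follows the streamlines.
a_tracer = particle_acceleration(u_f, u_p, tau_p=1e-3)

# Large tau_p: weak coupling, so the inertial particle lags the
# flow and its trajectory deviates from the streamlines.
a_inertial = particle_acceleration(u_f, u_p, tau_p=1.0)
```

The ML models described above replace this hand-derived drag relationship with one learned implicitly from data.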
23

Angehrn-Siu type effective base point freeness for quasi-log canonical pairs / 擬対数的標準対に対するアンゲールン-シウ型の有効自由性

Liu, Haidong 25 September 2018 (has links)
Kyoto University / 0048 / New-system doctoral course / Doctor of Science / Kō No. 21328 / Ri-Haku No. 4424 / New system||Sci||1635 (University Library) / Department of Mathematics and Mathematical Sciences, Graduate School of Science, Kyoto University / (Examination committee) Prof. Yoshinori Namikawa, Prof. Masaaki Ue, Prof. Atsushi Moriwaki / Meets Article 4, Paragraph 1 of the Degree Regulations / Doctor of Science / Kyoto University / DFAM
24

Effects of Thermostats in Molecular Dynamics Simulations of Nanoindentation

Guduguntla, Varun January 2019 (has links)
No description available.
25

POP-1/CETCF-1 has multiple functions in P ectoblast development

Deshpande, Rashmi Jayant 09 December 2005 (has links)
No description available.
26

Multivariate Applications of Bayesian Model Averaging

Noble, Robert Bruce 04 January 2001 (has links)
The standard methodology when building statistical models has been to use one of several algorithms to systematically search the model space for a good model. If the number of variables is small, then all possible models or best subset procedures may be used, but for data sets with a large number of variables, a stepwise procedure is usually implemented. The stepwise procedure of model selection was designed for its computational efficiency and is not guaranteed to find the best model with respect to any optimality criterion. While the model selected may not be the best possible of those in the model space, commonly it is almost as good as the best model. Often several models exist that are close competitors of the best model in terms of the selection criterion, yet classical model building dictates that a single model be chosen to the exclusion of all others. An alternative to this is Bayesian model averaging (BMA), which uses the information from all models based on how well each is supported by the data. Using BMA allows a variance component due to the uncertainty of the model selection process to be estimated. The variance of any statistic of interest is conditional on the model selected, so if there is model uncertainty, then variance estimates should reflect this. BMA methodology can also be used for variable assessment, since the probability that a given variable is active is readily obtained from the individual model posterior probabilities. The multivariate methods considered in this research are principal components analysis (PCA), canonical variate analysis (CVA), and canonical correlation analysis (CCA). Each method is viewed as a particular multivariate extension of univariate multiple regression. The marginal likelihood of a univariate multiple regression model has been approximated using the Bayesian information criterion (BIC), hence the marginal likelihood for these multivariate extensions also makes use of this approximation.
One of the main criticisms of multivariate techniques in general is that they are difficult to interpret. To aid interpretation, BMA methodology is used to assess the contribution of each variable to the methods investigated. A second issue that is addressed is displaying the results of an analysis graphically. The goal here is to effectively convey the germane elements of an analysis when BMA is used, in order to obtain a clearer picture of what conclusions should be drawn. Finally, the model uncertainty variance component can be estimated using BMA. The variance due to model uncertainty is ignored when the standard model building tenets are used, giving overly optimistic variance estimates. Even though the model attained via standard techniques may be adequate, in general it would be difficult to argue that the chosen model is in fact the correct model. It seems more appropriate to incorporate the information from all plausible models that are well supported by the data to make decisions, and to use variance estimates that account for the uncertainty in model estimation as well as model selection. / Ph. D.
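The BIC-based averaging this abstract describes can be sketched for the univariate regression case. This is a generic illustration, not the author's code (function names are assumptions): posterior model weights over all non-empty predictor subsets are computed from exp(-BIC/2), and a variable's activation probability is the summed weight of the models containing it.

```python
import numpy as np
from itertools import combinations

def bic_linear(X, y):
    """BIC of an OLS fit (X already contains an intercept column)."""
    n, k = X.shape
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = float(np.sum((y - X @ beta) ** 2))
    return n * np.log(rss / n) + k * np.log(n)

def bma_weights(X, y):
    """Posterior probabilities over all non-empty predictor subsets,
    using exp(-BIC/2) as the marginal-likelihood approximation."""
    n, p = X.shape
    models, bics = [], []
    for r in range(1, p + 1):
        for cols in combinations(range(p), r):
            Xm = np.column_stack([np.ones(n), X[:, cols]])
            models.append(cols)
            bics.append(bic_linear(Xm, y))
    b = np.array(bics)
    w = np.exp(-0.5 * (b - b.min()))   # shift BICs for numerical stability
    return models, w / w.sum()

def inclusion_probability(models, w, j):
    """Probability that variable j is active: the summed weight of all
    models containing j (the variable-assessment use of BMA)."""
    return sum(wi for m, wi in zip(models, w) if j in m)
```

Enumerating all 2^p - 1 subsets is only feasible for small p; the practical appeal of BMA is precisely that it spreads inference over competing models instead of committing to one.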
27

Software tools for matrix canonical computations and web-based software library environments

Johansson, Pedher January 2006 (has links)
This dissertation addresses the development and use of novel software tools and environments for the computation and visualization of canonical information as well as stratification hierarchies for matrices and matrix pencils. The simplest standard shape to which a matrix pencil with a given set of eigenvalues can be reduced is called the Kronecker canonical form (KCF). The KCF of a matrix pencil is unique, and all pencils in the manifold of strictly equivalent pencils - collectively termed the orbit - can be reduced to the same canonical form and so have the same canonical structure. For a problem with fixed input size, all orbits are related under small perturbations. These relationships can be represented in a closure hierarchy with a corresponding graph depicting the stratification of these orbits. Since degenerate canonical structures are common in many applications, software tools to determine canonical information, especially under small perturbations, are central to understanding the behavior of these problems. The focus in this dissertation is the development of a software tool called StratiGraph. Its purpose is the computation and visualization of stratification graphs of orbits and bundles (i.e., unions of orbits in which the eigenvalues may change) for matrices and matrix pencils. It also supports matrix pairs, which are common in control systems. StratiGraph is extensible by design, and a well documented plug-in feature enables it, for example, to communicate with Matlab(TM). The use and associated benefits of StratiGraph are illustrated via numerous examples. Implementation considerations such as flexible software design, suitable data representations, and good and efficient graph layout algorithms are also discussed. A way to estimate upper and lower bounds on the distance between an input S and other orbits is presented. The lower bounds are of Eckart-Young type, based on the matrix representation of the associated tangent spaces.
The upper bounds are computed as the Frobenius norm of a perturbation F such that S + F is in the manifold defining a specified orbit. Using associated plug-ins to StratiGraph, this information can be computed in Matlab, while visualization alongside other canonical information remains within StratiGraph itself. Also, a proposal of the functionality and structure of a framework for the computation of matrix canonical structure is presented. Robust, well-known algorithms, as well as algorithms improved and developed in this work, are used. The framework is implemented as a prototype Matlab toolbox. The intention is to collect software for computing canonical structures as well as for computing bounds, and to integrate it with the theory of stratification into a powerful new environment called the MCS toolbox. Finally, a set of utilities for generating web computing environments related to mathematical and engineering library software is presented. The web interface can be accessed from a standard web browser with no need for additional software installation on the local machine. Integration with the control and systems library SLICOT further demonstrates the efficacy of this approach.
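The canonical information this abstract concerns is ultimately the eigenstructure of a pencil A - λB. As a minimal sketch only (assuming B is invertible, so the pencil is regular with no infinite eigenvalues; the degenerate cases that motivate KCF software need far more machinery), generalized eigenvalues can be obtained as follows. The function name is illustrative, not from the dissertation's tools:

```python
import numpy as np

def pencil_eigenvalues(A, B):
    """Eigenvalues of the regular pencil A - lambda*B, assuming B is
    invertible. Degenerate pencils (singular B, infinite eigenvalues,
    singular Kronecker blocks) require the full KCF machinery and
    cannot be handled by this shortcut."""
    return np.linalg.eigvals(np.linalg.solve(B, A))

A = np.array([[2.0, 0.0], [0.0, 6.0]])
B = np.array([[1.0, 0.0], [0.0, 2.0]])
lam = pencil_eigenvalues(A, B)   # pencil eigenvalues: 2 and 3
```

Because eigenvalues and canonical structure change discontinuously under perturbation, tools such as StratiGraph track whole orbits and their closure hierarchy rather than a single numerical decomposition like this one.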
28

Random Subspace Analysis on Canonical Correlation of High Dimensional Data

Yamazaki, Ryo January 2016 (has links)
High dimensional, low sample data have singular sample covariance matrices, rendering them impossible to analyse by regular canonical correlation (CC). By using the random subspace method (RSM), calculation of canonical correlation becomes possible, and a Monte Carlo analysis shows the resulting maximal CC can reliably distinguish between data with true correlation (above 0.5) and without. Statistics gathered from RSM-CCA can be used to model the true population correlation by beta regression, given certain characteristics of the data set. RSM-CCA applied to real biological data, however, shows that the method can be sensitive to deviations from normality and high degrees of multi-collinearity.
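The core idea above, drawing random low-dimensional subspaces so each CCA subproblem has a non-singular sample covariance, can be sketched as follows. This is a generic illustration under assumed function names, not the thesis implementation; the CCA step uses the standard QR/SVD (Björck-Golub) formulation:

```python
import numpy as np

def max_canonical_correlation(X, Y):
    """First canonical correlation via the Bjorck-Golub method:
    the largest singular value of Qx^T Qy, where Qx, Qy are
    orthonormal bases of the centered data blocks."""
    Qx, _ = np.linalg.qr(X - X.mean(axis=0))
    Qy, _ = np.linalg.qr(Y - Y.mean(axis=0))
    return np.linalg.svd(Qx.T @ Qy, compute_uv=False)[0]

def rsm_cca(X, Y, k, n_draws=200, seed=0):
    """Random subspace CCA: draw k columns from each block (choose
    k well below the sample count so the subproblem is non-singular)
    and collect the maximal CC of each draw."""
    rng = np.random.default_rng(seed)
    ccs = []
    for _ in range(n_draws):
        i = rng.choice(X.shape[1], size=k, replace=False)
        j = rng.choice(Y.shape[1], size=k, replace=False)
        ccs.append(max_canonical_correlation(X[:, i], Y[:, j]))
    return np.asarray(ccs)
```

The distribution of the collected maximal CCs, rather than any single value, is what the abstract's beta-regression modeling of the population correlation is built on.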
29

Error in the invariant measure of numerical discretization schemes for canonical sampling of molecular dynamics

Matthews, Charles January 2013 (has links)
Molecular dynamics (MD) computations aim to simulate materials at the atomic level by approximating molecular interactions classically, relying on the Born-Oppenheimer approximation and semi-empirical potential energy functions as an alternative to solving the difficult time-dependent Schrödinger equation. An approximate solution is obtained by discretization in time, with an appropriate algorithm used to advance the state of the system between successive timesteps. Modern MD simulations model complex systems with as many as a trillion individual atoms in three spatial dimensions. Many applications use MD to compute ensemble averages of molecular systems at constant temperature. Langevin dynamics approximates the effects of weakly coupling an external energy reservoir to a system of interest by adding the stochastic Ornstein-Uhlenbeck process to the system momenta; the resulting trajectories are ergodic with respect to the canonical (Boltzmann-Gibbs) distribution. By solving the resulting stochastic differential equations (SDEs), we can compute trajectories that sample the accessible states of a system at a constant temperature by evolving the dynamics in time. The complexity of the classical potential energy function requires the use of efficient discretization schemes to evolve the dynamics. In this thesis we provide a systematic evaluation of splitting-based methods for the integration of Langevin dynamics. We focus on the weak properties of methods for configurational sampling in MD, given as the accuracy of averages computed via numerical discretization. Our emphasis is on the application of discretization algorithms to high performance computing (HPC) simulations of a wide variety of phenomena where configurational sampling is the goal.
Our first contribution is to give a framework for the analysis of stochastic splitting methods in the spirit of backward error analysis, which provides, in certain cases, explicit formulae required to correct the errors in observed averages. A second contribution of this thesis is the investigation of the performance of schemes in the overdamped limit of Langevin dynamics (Brownian or Smoluchowski dynamics), showing the inconsistency of some numerical schemes in this limit. A new method is given that is second-order accurate (in law) but requires only one force evaluation per timestep. Finally we compare the performance of our derived schemes against those in common use in MD codes, by comparing the observed errors introduced by each algorithm when sampling a solvated alanine dipeptide molecule, based on our implementation of the schemes in state-of-the-art molecular simulation software. One scheme is found to give exceptional results for the computed averages of functions purely of position.
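A splitting method of the kind evaluated in this thesis can be sketched on a toy problem. The example below uses the "BAOAB" ordering (half kick, half drift, exact Ornstein-Uhlenbeck update, half drift, half kick) applied to a harmonic oscillator; it is an illustrative sketch under assumed names, not the thesis code, and configurational accuracy is checked against the exact canonical average ⟨q²⟩ = kT/k:

```python
import numpy as np

def baoab_step(q, p, force, h, gamma, kT, m, rng):
    """One step of the 'BAOAB' splitting of Langevin dynamics:
    B = half momentum kick, A = half position drift,
    O = exact Ornstein-Uhlenbeck update of the momenta."""
    p = p + 0.5 * h * force(q)                                  # B
    q = q + 0.5 * h * p / m                                     # A
    c = np.exp(-gamma * h)
    p = c * p + np.sqrt(kT * m * (1.0 - c * c)) * rng.normal()  # O
    q = q + 0.5 * h * p / m                                     # A
    p = p + 0.5 * h * force(q)                                  # B
    return q, p

# Harmonic oscillator: the configurational average <q^2> should
# approach kT / k_spring under the canonical distribution.
rng = np.random.default_rng(0)
k_spring = kT = m = gamma = 1.0
h = 0.1
force = lambda q: -k_spring * q
q = p = 0.0
acc, count = 0.0, 0
for i in range(100_000):
    q, p = baoab_step(q, p, force, h, gamma, kT, m, rng)
    if i >= 1_000:                 # discard equilibration steps
        acc += q * q
        count += 1
mean_q2 = acc / count              # close to kT / k_spring = 1
```

Different orderings of the same B, A, and O sub-steps yield different discretization errors in such averages, which is exactly the kind of comparison the backward-error framework above makes systematic.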
30

Topology and mass generation mechanisms in abelian gauge field theories

Bertrand, Bruno 09 September 2008 (has links)
Among a number of fundamental issues, the origin of inertial mass remains one of the major open problems in particle physics. Furthermore, topological effects related to non-perturbative field configurations are poorly understood in those gauge theories of direct relevance to our physical universe. Motivated by such issues, this Thesis provides a deeper understanding of the appearance of topological effects in abelian gauge field theories, also in relation to the existence of a mass gap for the gauge interactions. These effects are not accounted for when proceeding through gauge fixings, as is customary in the literature. The original Topological-Physical factorisation put forth in this work makes it possible to properly identify, in topologically massive gauge theories (TMGT), a topological sector which appears under formal limits within the Lagrangian formulation. Our factorisation then allows for a straightforward quantisation of TMGT, accounting for all the topological features inherent to such dynamics. Moreover, dual actions are constructed while preserving the gauge symmetry, also in the presence of dielectric couplings. All the celebrated mass generation mechanisms preserving the gauge symmetry are then recovered, but now find their rightful place through a network of dualities, modulo the presence of topological terms generating topological effects. In particular, a dual formulation of the famous Nielsen-Olesen vortices is constructed from TMGT. Within a novel, physically equivalent picture, these topological defects are interpreted as dielectric monopoles.
