  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1211

On Viscous Flux Discretization Procedures For Finite Volume And Meshless Solvers

Munikrishna, N 06 1900 (has links)
This work deals with discretizing viscous fluxes in the context of unstructured-data-based finite volume and meshless solvers, two competing methodologies for simulating viscous flows past complex industrial geometries. The two important requirements of a viscous discretization procedure are consistency and positivity. While consistency is a fundamental requirement, positivity is linked to the robustness of the solution methodology. The following advancements are made through this work within the finite volume and meshless frameworks. Finite Volume Method: Several viscous discretization procedures available in the literature are reviewed for: (1) ability to handle general grid elements, (2) efficiency, particularly for 3D computations, (3) consistency, (4) positivity as applied to a model equation, and (5) global error behavior as applied to a model equation. While some of the popular procedures result in an inconsistent formulation, the consistent procedures are observed to be computationally expensive and also have problems associated with robustness. From a systematic global error study, we have observed that even a formally inconsistent scheme exhibits consistency in terms of global error, i.e., the global error decreases with grid refinement. This observation is important and also encouraging from the viewpoint of devising a suitable discretization scheme for viscous fluxes. This study suggests that one can relax the consistency requirement in order to gain in terms of robustness and computational cost, two key ingredients for any industrial flow solver. Some of the procedures are analysed for positivity as applied to a Laplacian, and it is found that the two requirements of a viscous discretization procedure, consistency (accuracy) and positivity, are essentially conflicting. 
Based on the review, four representative schemes are selected and used in HIFUN-2D (High resolution Flow Solver on UNstructured Meshes), an unstructured-data-based cell-center finite volume flow solver, to simulate standard laminar and turbulent flow test cases. From the analysis, we advocate the use of the Green-Gauss theorem based diamond-path procedure, which can render a high level of robustness to the flow solver for industrial computations. Meshless Method: An Upwind Least Squares Finite Difference (LSFD-U) meshless solver is developed for simulating viscous flows. Different viscous discretization procedures are proposed and analysed for positivity, and the procedure found to be more positive is employed. Obtaining a suitable point distribution, particularly for viscous flow computations, is one of the important components for the success of meshless solvers. In principle, meshless solvers can operate on any point distribution obtained from structured, unstructured or Cartesian meshes, but Cartesian meshing is the most natural candidate for obtaining the point distribution. Therefore, the performance of LSFD-U in simulating viscous flows using point distributions obtained from Cartesian-like grids is evaluated. While we have successfully computed laminar viscous flows, there are difficulties in solving turbulent flows. In this context, we have evolved a strategy to generate a suitable point distribution for simulating turbulent flows with a meshless solver. The strategy involves a hybrid Cartesian point distribution wherein the boundary layer region is filled with a high-aspect-ratio body-fitted structured mesh and the potential flow region with a unit-aspect-ratio Cartesian mesh. The main advantage of our solver lies in handling the structured and Cartesian grid interface. 
The interface algorithm is considerably simplified compared to the hybrid-Cartesian-mesh-based finite volume methodology by exploiting the advantages accruing from the use of a meshless solver. Cheap, simple and robust discretization procedures are evolved for both inviscid and viscous fluxes, exploiting the basic features exhibited by the hybrid point distribution. These procedures are also subjected to positivity analysis and a systematic global error study. It should be remarked that the viscous discretization procedure employed in the structured grid block is positive, and in fact this feature imparts the required robustness to the solver for computing turbulent flows. We have demonstrated the capability of the meshless solver LSFD-U to solve turbulent flow past complex aerodynamic configurations by solving the flow past a multi-element airfoil configuration. In our view, the success shown by this work in computing turbulent flows can be considered a landmark development in the area of meshless solvers, with great potential for industrial applications.
1212

Development Of Algorithms For Bad Data Detection In Power System State Estimation

Musti, S S Phaniram 07 1900 (has links)
Power system state estimation (PSSE) is an energy management system function responsible for computing the most likely values of the state variables, viz., bus voltage magnitudes and angles. The state estimate is obtained for a network at a given instant by solving a system of mostly nonlinear equations whose parameters are the redundant measurements, both static, such as transformer/line parameters, and dynamic, such as status of circuit breakers/isolators, transformer tap positions, active/reactive power flows, generator active/reactive power outputs, etc. PSSE involves solving an overdetermined set of nonlinear equations by minimizing a weighted norm of the measurement residuals. Typically, the L1 and L2 norms are employed. The use of the L2 norm leads to state estimation based on the weighted least squares (WLS) criterion. This method is known to exhibit efficient filtering capability when the errors are Gaussian, but fails in the presence of bad data. The method of hypothesis testing identification can be incorporated into the WLS estimator to detect and identify bad data; nevertheless, it is prone to failure when the measurement is a leverage point. On the other hand, state estimation based on the weighted least absolute value (WLAV) criterion, using the L1 norm, has superior bad data suppression capability, but it too fails to reject bad measurements associated with leverage points. Leverage points are highly influential measurements that attract the state estimator solution towards them. Consequently, much research effort has focused recently on producing an LAV estimator that remains robust in the presence of bad leverage measurements. This problem is addressed in the thesis. Two methods aimed at developing robust estimators insensitive to bad leverage points are proposed: (i) The objective function used here is obtained by linearizing the L2 norm of the error function. 
In addition to the constraints corresponding to the measurement set, constraints corresponding to bounds on the state variables are also included. Linear programming (LP) optimization is carried out using the upper bound optimization technique. (ii) A hybrid optimization algorithm, combining the "upper bound optimization technique" and "an improved algorithm for discrete l1 linear approximation", which restricts the state variables from leaving the basis during the optimization process. Linear programming optimization, with bounds on the state variables as additional constraints, is carried out using the proposed hybrid optimization algorithm. The proposed state estimator algorithms are tested on a 24-bus EHV equivalent of the southern power network, a 36-bus EHV equivalent of the western grid, a 205-bus interconnected grid system of the southern region and the IEEE 39-bus New England system. The performance of the two proposed methods is compared with the WLAV estimator in the presence of bad data associated with leverage points. The effect of bad leverage measurements on interacting non-leverage bad data is also compared. Results show that the proposed state estimator algorithms reject bad data associated with leverage points efficiently.
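The WLS-versus-WLAV contrast described above can be sketched on a linear measurement model with one gross error. This is only a minimal illustration, not the thesis's LP-based algorithms: the L1 (WLAV-style) estimate is approximated here by iteratively reweighted least squares, and the measurement matrix is synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)
n_meas, n_state = 12, 3
H = rng.normal(size=(n_meas, n_state))          # linearised measurement model
x_true = np.array([1.0, -0.5, 0.25])
z = H @ x_true + 0.01 * rng.normal(size=n_meas) # small Gaussian noise
z[4] += 8.0                                     # one gross (bad) measurement

# WLS criterion: minimise ||z - Hx||_2^2 (uniform weights -> ordinary LS)
x_wls = np.linalg.lstsq(H, z, rcond=None)[0]

# WLAV-style L1 criterion: minimise ||z - Hx||_1, here approximated by
# iteratively reweighted least squares (weight ~ 1/|residual|)
x = x_wls.copy()
for _ in range(50):
    r = z - H @ x
    w = 1.0 / np.maximum(np.abs(r), 1e-6)
    W = np.diag(w)
    x = np.linalg.solve(H.T @ W @ H, H.T @ W @ z)
x_wlav = x

print(np.abs(x_wls - x_true).max(), np.abs(x_wlav - x_true).max())
```

The L1-type estimate effectively ignores the gross error, while the L2 (WLS) estimate is pulled toward it, which is exactly the filtering contrast the abstract describes (leverage points, which defeat plain WLAV too, are the harder case the thesis targets).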
1213

Stability Rates for Linear Ill-Posed Problems with Convolution and Multiplication Operators

Hofmann, B., Fleischer, G. 30 October 1998 (has links) (PDF)
In this paper we deal with the 'strength' of ill-posedness for ill-posed linear operator equations Ax = y in Hilbert spaces, where, following M. Z. Nashed [15], we distinguish ill-posedness of type I, where A is not compact but its range R(A) is not closed (R(A) ≠ cl R(A)), from ill-posedness of type II, where A is compact. From our considerations it seems to follow that problems with noncompact operators A are not in general 'less' ill-posed than problems with compact operators. We motivate this statement by comparing the approximation and stability behaviour of discrete least-squares solutions and the growth rate of Galerkin matrices in both cases. Ill-posedness measures for compact operators A, as discussed in [10], are derived from the decay rate of the nonincreasing sequence of singular values of A. Since singular values do not exist for noncompact operators A, we introduce stability rates in order to have a common measure for the compact and noncompact cases. Properties of these rates are illustrated by means of convolution equations in the compact case and by means of equations with multiplication operators in the noncompact case. Moreover, using increasing rearrangements of the multiplier functions, specific measures of ill-posedness called ill-posedness rates are considered for the multiplication operators. In this context, the character of sufficient conditions providing convergence rates of Tikhonov regularization is compared for compact operators and multiplication operators.
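The compact/noncompact contrast can be sketched numerically: a discretized Gaussian convolution operator (compact) has rapidly decaying singular values, while the discretization of the multiplication operator (Ax)(t) = t·x(t) is diagonal, so its "singular values" are just the sorted multiplier values and show no such decay. The kernel width and grid size below are illustrative choices, not taken from the paper.

```python
import numpy as np

n = 200
t = np.linspace(0, 1, n)

# Compact case: discretised convolution with a Gaussian kernel
K = np.exp(-(t[:, None] - t[None, :])**2 / (2 * 0.1**2)) / n
s_conv = np.linalg.svd(K, compute_uv=False)

# Noncompact case: multiplication operator (A x)(t) = t * x(t);
# its discretisation is diagonal, so singular values are the sorted
# multiplier samples (an increasing-rearrangement picture)
M = np.diag(t)
s_mult = np.linalg.svd(M, compute_uv=False)

# Relative size of the 50th singular value in each case
print(s_conv[49] / s_conv[0], s_mult[49] / s_mult[0])
```

The convolution spectrum collapses by many orders of magnitude within 50 indices (severe ill-posedness), while the multiplication spectrum decays only as the multiplier function does near its zero, which is the behaviour the paper's stability and ill-posedness rates are designed to quantify.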
1214

推薦系統資料插補改良法-電影推薦系統應用 / Improving recommendations through data imputation-with application for movie recommendation

楊智博, Yang, Chih Po Unknown Date (has links)
Many online stores and e-commerce platforms today rely on recommender systems to raise sales when marketing products to consumers. Companies such as Amazon and Netflix study their customers' usage habits in depth, build dedicated recommender systems, and make personalized product recommendations to each customer. Recommender system techniques fall into two broad classes, collaborative filtering and content filtering. This study examines latent factor models within collaborative filtering, using matrix factorization to recover the rating matrix. Koren et al. (2009) divide matrix factorization algorithms into two main types: stochastic gradient descent and alternating least squares. This study has three goals: first, to compare the predictive performance of alternating least squares and stochastic gradient descent; second, to examine the performance of the two matrix factorization algorithms after adding bias terms; third, to first run alternating least squares and stochastic gradient descent, impute the missing values of the original data with their predictions, then apply singular value decomposition to the completed data, and observe the difference between the methods before and after. The results show that stochastic gradient descent requires less computation time than alternating least squares. In addition, after completing the two matrix factorization algorithms and imputing the missing values with the predicted values, the singular value decomposition results also show improved predictive performance. / Recommender systems have been used extensively by Internet companies such as Amazon and Netflix to make recommendations for Internet users. Techniques for recommender systems can be divided into content filtering approaches and collaborative filtering approaches. Matrix factorization is a popular method for the collaborative filtering approach. It minimizes the objective function through stochastic gradient descent or alternating least squares. This thesis has three goals. First, we compare the alternating least squares method and the stochastic gradient descent method. Secondly, we compare the performance of the matrix factorization method with and without the bias term. Thirdly, we combine singular value decomposition and matrix factorization. As expected, we found that stochastic gradient descent takes less time than the alternating least squares method, and that the matrix factorization method with bias term gives more accurate predictions. We also found that combining singular value decomposition with matrix factorization can improve the predictive accuracy.
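The biased matrix-factorization model described in the abstract, R[u,i] ≈ mu + b_u + b_i + p_u·q_i fitted by stochastic gradient descent, can be sketched on synthetic ratings. The dimensions, learning rate and regularization below are illustrative choices, not the thesis's settings.

```python
import numpy as np

rng = np.random.default_rng(1)
n_users, n_items, k = 30, 20, 3
P_true = rng.normal(size=(n_users, k))
Q_true = rng.normal(size=(n_items, k))
R = P_true @ Q_true.T                      # synthetic "true" ratings (rank k)
mask = rng.random(R.shape) < 0.5           # only ~50% of entries observed
obs = [(u, i) for u in range(n_users) for i in range(n_items) if mask[u, i]]

# Model: R[u,i] ~ mu + bu[u] + bi[i] + P[u] @ Q[i], trained by SGD
P = 0.1 * rng.normal(size=(n_users, k))
Q = 0.1 * rng.normal(size=(n_items, k))
bu = np.zeros(n_users); bi = np.zeros(n_items)
mu = np.mean([R[u, i] for u, i in obs])    # global mean of observed ratings
lr, reg = 0.02, 0.01

def rmse():
    err = [R[u, i] - (mu + bu[u] + bi[i] + P[u] @ Q[i]) for u, i in obs]
    return float(np.sqrt(np.mean(np.square(err))))

before = rmse()
for epoch in range(60):
    for u, i in obs:
        e = R[u, i] - (mu + bu[u] + bi[i] + P[u] @ Q[i])
        bu[u] += lr * (e - reg * bu[u])    # bias updates
        bi[i] += lr * (e - reg * bi[i])
        # simultaneous factor updates (old values on the right-hand side)
        P[u], Q[i] = (P[u] + lr * (e * Q[i] - reg * P[u]),
                      Q[i] + lr * (e * P[u] - reg * Q[i]))
after = rmse()
print(before, after)
```

After training, predictions mu + bu[u] + bi[i] + P[u] @ Q[i] can also be used to impute the unobserved entries, which is the data-imputation step the thesis feeds into a subsequent singular value decomposition.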
1215

Factors affecting store brand purchase in the Greek grocery market

Sarantidis, Paraskevi January 2012 (has links)
This study is an in-depth investigation of the factors that affect store brand purchases. It aims to help both retailers and manufacturers predict store brand purchases through an improved understanding of the effects of three latent variables: customer satisfaction, loyalty with the store (expressed through word-of-mouth), and trust in store brands. An additional aim is to explore variations in the level of store brand adoption and the inter-relationships between the selected constructs. Data were collected through a telephone survey of those responsible for household grocery shopping who shop at the nine leading grocery retailers in Greece. A total of 904 respondents completed the questionnaire, based upon a quota of 100 respondents for each of the nine retailers. Data were analyzed through chi-square tests, analysis of variance and partial least squares. The proposed model was tested by partial least squares path modeling, which relates the latent variables to the dependent manifest variable: store brand purchases. The findings provide empirical support that store brand purchases are positively influenced by the consumers' perceived level of trust in store brands. The consumer decision-making process for store brands is complex, and establishing customer satisfaction and loyalty with the store does not appear to influence store brand purchases or the level of trust in the retailer's store brands in the specific context under study. Consequently, the most appropriate way to influence store brand purchases in the Greek market is by increasing the level of trust in the retailer's store brands. It is suggested that retailers should therefore invest in trust-building strategies for their own store brands and try to capitalize on their brand equity by using a family brand policy. Theoretical and managerial implications of the findings are discussed and opportunities for further research are suggested.
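The path-modeling idea, regressing a manifest outcome on latent-variable scores, can be sketched with a one-component partial least squares fit on synthetic data. The variable names and effect sizes below are invented for illustration only, chosen to echo the study's finding that trust dominates the other constructs.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200
trust = rng.normal(size=n)          # hypothetical latent: trust in store brands
satisfaction = rng.normal(size=n)   # hypothetical latent: satisfaction/loyalty
# synthetic outcome where trust carries most of the effect
purchases = 0.8 * trust + 0.05 * satisfaction + 0.3 * rng.normal(size=n)

X = np.column_stack([trust, satisfaction])
y = purchases

# One-component PLS: weight vector proportional to X'y (NIPALS first step)
w = X.T @ y
w /= np.linalg.norm(w)
t_score = X @ w                               # component scores
b = (t_score @ y) / (t_score @ t_score)       # inner regression on the scores
path_weights = b * w                          # back to the original variables
print(path_weights)                           # trust weight dominates
```

A single component suffices here because the synthetic outcome is driven by one dominant direction; the study itself uses full PLS path modeling over measured indicator variables.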
1216

Modelling Implied Volatility of American-Asian Options : A Simple Multivariate Regression Approach

Radeschnig, David January 2015 (has links)
This report focuses on implied volatility for American-styled Asian options, and on a least squares approximation method as a way of estimating its magnitude. Asian option prices are approximated using Quasi-Monte Carlo simulations and least squares regression, where a known volatility is used as input. A regression tree then empirically builds a database of regression vectors for the implied volatility based on the simulated option prices. The mean squared errors between input and estimated volatilities are then compared using five-fold cross-validation as well as the non-parametric Kruskal-Wallis hypothesis test of equal distributions. The study results in a proposed semi-parametric model for estimating implied volatilities from options. The user must, however, be aware that this model may suffer from estimation bias, and it should therefore be used with caution.
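The core inversion step, recovering the volatility that reproduces an observed price, can be sketched with a closed-form Black-Scholes European call as a stand-in pricer (the thesis prices American-Asian options by Quasi-Monte Carlo, which is far more involved). Since the price is increasing in volatility, simple bisection suffices.

```python
import math

def bs_call(S, K, T, r, sigma):
    """Black-Scholes European call price (a simple stand-in pricer;
    the thesis's American-Asian payoff has no such closed form)."""
    d1 = (math.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    N = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
    return S * N(d1) - K * math.exp(-r * T) * N(d2)

def implied_vol(price, S, K, T, r, lo=1e-4, hi=3.0, tol=1e-8):
    """Invert the pricer by bisection: price is monotone increasing in sigma."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if bs_call(S, K, T, r, mid) < price:
            lo = mid
        else:
            hi = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

price = bs_call(100, 100, 1.0, 0.02, 0.25)
print(implied_vol(price, 100, 100, 1.0, 0.02))   # recovers the input 0.25
```

The thesis's regression-tree approach amortizes exactly this inversion: instead of bisecting per option, it learns the price-to-volatility map from simulated (volatility, price) pairs.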
1217

Adaptive Techniques for V-BLAST Receivers in MIMO Systems

Βλάχος, Ευάγγελος 03 August 2009 (has links)
Wireless multiple-antenna MIMO systems form one of the main fronts of development in telecommunications. However, their highly random nature, as well as the interference between the multiple data streams, necessitates the use of modern equalization techniques. Adaptive equalization at the receiver of a communication system is used to cope with the dynamic nature of the wireless channel and to track changes in its characteristics. Moreover, nonlinear decision-feedback equalization techniques are necessary to remove the intersymbol interference that arises in these systems. This thesis deals with adaptive equalization methods at the receiver of a communication system. We consider the following adaptive algorithms for error minimization: the Recursive Least Squares (RLS) algorithm, the iterative Conjugate Gradient (CG) algorithm and the iterative Modified Conjugate Gradient (MCG) algorithm. We find that when these algorithms are used with linear equalization techniques, convergence is very slow and the error floor is generally high. We therefore conclude that, to obtain fast convergence of the adaptive algorithms and to cope with the intersymbol interference in MIMO systems, nonlinear equalization techniques are necessary. We first use the generalized decision-feedback method (GDFE) and then study a modern decision-feedback technique that uses an ordering criterion for symbol cancellation (OSIC, or V-BLAST). As the simulations show, this technique achieves the lowest error floor, but at increased computational cost. We also find that applying this technique with the Modified Conjugate Gradient algorithm is not feasible. 
Within this thesis, we describe a particular implementation of the ordered-cancellation technique that uses the Recursive Least Squares algorithm with reduced complexity. We then generalize its application to the Conjugate Gradient algorithms and find that the Modified Conjugate Gradient algorithm cannot be used in this case either. To implement an OSIC system with a Conjugate Gradient algorithm, an algorithm whose convergence has no time dependence is required, such as the basic Conjugate Gradient algorithm. / Wireless systems with multiple antenna configurations have recently emerged as one of the most significant technical breakthroughs in modern communications. However, because of the extremely random nature of wireless channels, modern equalization methods must be used to counteract signal degradation. Adaptive equalization at the receiver of the telecommunication system can be used to cope with this dynamic nature of the wireless channel and track changes in its characteristics. Furthermore, nonlinear decision-feedback methods are necessary to cancel the intersymbol interference that occurs in these systems. This work deals with adaptive equalization methods at the receiver of the telecommunication system. We use the following adaptive algorithms to minimize the error: the Recursive Least Squares (RLS) algorithm, the iterative Conjugate Gradient (CG) algorithm and the iterative Modified Conjugate Gradient (MCG) algorithm. When these algorithms are used with linear methods, they give very slow convergence and a high final error. It is therefore necessary to use nonlinear equalization methods in order to achieve a fast convergence rate and deal with the increased intersymbol interference in MIMO systems. 
First we use the generalized decision-feedback method (GDFE), and then the modern ordered successive cancellation method (OSIC, or V-BLAST). Based on the simulations, we conclude that the latter method achieves the lowest error, but at high computational cost. Furthermore, the OSIC method cannot be used with the Modified Conjugate Gradient algorithm. In this work, we describe a specific implementation of the OSIC method which uses the RLS algorithm with low computational complexity. We then generalize its usage to the Conjugate Gradient algorithms. Finally, we conclude that the MCG algorithm also cannot be used with the OSIC method at low computational complexity. To construct an OSIC system based on a Conjugate Gradient algorithm, the algorithm's convergence must not depend on time, as is the case for the basic Conjugate Gradient algorithm.
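The RLS building block used inside the OSIC/V-BLAST receiver can be sketched as a single-channel adaptive linear equalizer trained on known BPSK symbols. The channel taps, forgetting factor, decision delay and filter length below are illustrative choices, not taken from the thesis.

```python
import numpy as np

rng = np.random.default_rng(3)
n_taps, n_sym = 5, 400
channel = np.array([1.0, 0.4, 0.2])             # simple ISI channel (illustrative)
s = rng.choice([-1.0, 1.0], size=n_sym)         # BPSK training symbols
x = np.convolve(s, channel)[:n_sym] + 0.01 * rng.normal(size=n_sym)

# RLS linear equaliser: w minimises an exponentially weighted squared error
lam = 0.99                                      # forgetting factor
w = np.zeros(n_taps)
P = 100.0 * np.eye(n_taps)                      # inverse-correlation estimate
errs = []
for n in range(n_taps, n_sym):
    u = x[n - n_taps + 1 : n + 1][::-1]         # regressor, most recent first
    d = s[n - 2]                                # desired symbol (delay of 2)
    k = P @ u / (lam + u @ P @ u)               # Kalman gain vector
    e = d - w @ u                               # a priori error
    w = w + k * e                               # tap update
    P = (P - np.outer(k, u @ P)) / lam          # Riccati-style P update
    errs.append(e * e)

print(np.mean(errs[:50]), np.mean(errs[-50:]))  # squared error drops sharply
```

This rapid convergence over a few tens of symbols is the property the fast (linear-like) convergence of RLS is known for, and it is what the OSIC implementation in the thesis exploits at each cancellation stage.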
1218

Multivariate design of molecular docking experiments : An investigation of protein-ligand interactions

Andersson, David January 2010 (has links)
To be able to make informed decisions regarding the research of new drug molecules (ligands), it is crucial to have access to information regarding the chemical interaction between the drug and its biological target (protein). Computer-based methods have an established role in drug research today and, by using methods such as molecular docking, it is possible to investigate the way in which ligands and proteins interact. Despite the acceleration in computer power experienced in the last decades, many problems persist in modelling these complicated interactions. The main objective of this thesis was to investigate and improve molecular modelling methods aimed at estimating protein-ligand binding. In order to do so, we have utilised chemometric tools, e.g. design of experiments (DoE) and principal component analysis (PCA), in the field of molecular modelling. More specifically, molecular docking was investigated as a tool for reproducing ligand poses in protein 3D structures and for virtual screening. Adjustable parameters in two docking programs were varied using DoE, and parameter settings were identified which led to improved results. In an additional study, we explored the nature of ligand-binding cavities in proteins, since they are important factors in protein-ligand interactions, especially in the prediction of the function of newly found proteins. We developed a strategy, comprising a new set of descriptors and PCA, to map proteins based on the physicochemical properties of their cavities. Finally, we applied our developed strategies to design a set of glycopeptides which were used to study autoimmune arthritis. A combination of docking and statistical molecular design, synthesis and biological evaluation led to new binders for two different class II MHC proteins and recognition by a panel of T-cell hybridomas. New and interesting SAR conclusions could be drawn, and the results will serve as a basis for selecting peptides to include in in vivo studies.
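The DoE-plus-least-squares workflow described above can be sketched with a two-level full factorial design; its orthogonality is what lets main effects of the varied parameters be estimated independently. The factor names and the synthetic response below are illustrative, not the actual docking-software settings.

```python
import itertools
import numpy as np

# Full two-level factorial design for three hypothetical docking parameters
factors = ["grid_spacing", "num_poses", "exhaustiveness"]   # illustrative names
design = np.array(list(itertools.product([-1, 1], repeat=len(factors))), float)

# Orthogonality check: X'X is diagonal, so effects decouple
XtX = design.T @ design
print(XtX)                      # 8 * identity for a full 2^3 design

# Estimate main effects from a synthetic response by least squares
rng = np.random.default_rng(4)
y = 2.0 * design[:, 0] - 1.0 * design[:, 2] + 0.1 * rng.normal(size=8)
X = np.column_stack([np.ones(8), design])   # intercept + factor columns
beta = np.linalg.lstsq(X, y, rcond=None)[0]
print(beta)                     # intercept near 0, effects near [2, 0, -1]
```

In the thesis the response would be a docking quality measure (e.g. pose reproduction accuracy), and the fitted effects indicate which parameter settings to move toward.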
1219

New Techniques for Estimation of Source Parameters : Applications to Airborne Gravity and Pseudo-Gravity Gradient Tensors

Beiki, Majid January 2011 (has links)
Gravity gradient tensor (GGT) data contain the second derivatives of the Earth's gravitational potential in three orthogonal directions. GGT data can be measured using land, airborne, marine or space platforms. In the last two decades, the applications of GGT data in hydrocarbon exploration, mineral exploration and structural geology have increased considerably. This work focuses on developing new interpretation techniques for GGT data as well as for the pseudo-gravity gradient tensor (PGGT) derived from the measured magnetic field. The applications of the developed methods are demonstrated on a GGT data set from the Vredefort impact structure, South Africa, and a magnetic data set from the Särna area, west central Sweden. The eigenvectors of the symmetric GGT can be used to estimate the position of the causative body as well as its strike direction. For a given measurement point, the eigenvector corresponding to the maximum eigenvalue points approximately toward the center of mass of the source body. For quasi-2D structures, the strike direction of the source can be estimated from the direction of the eigenvectors corresponding to the smallest eigenvalues. The same properties of the GGT hold for the pseudo-gravity gradient tensor (PGGT) derived from magnetic field data, assuming that the magnetization direction is known. The analytic signal concept is applied to GGT data in three dimensions. Three analytic signal functions are introduced along the x-, y- and z-directions, called directional analytic signals. The directional analytic signals are homogeneous and satisfy Euler's homogeneity equation. Euler deconvolution of the directional analytic signals can be used to locate causative bodies. The structural index of the gravity field is automatically identified by solving three Euler equations derived from the GGT for a set of data points located within a square window with adjustable size. 
For 2D causative bodies striking in the y-direction, the measured gxz and gzz components of the GGT can be jointly inverted to estimate the parameters of infinite dike and geological contact models. Once the strike direction of a 2D causative body is estimated, the measured components can be transformed into the strike coordinate system. The GGT data within a set of square windows are deconvolved for both the infinite dike and geological contact models, and the best model is chosen based on the smallest data-fit error. / Erroneously printed as Digital Comprehensive Summaries of Uppsala Dissertations from the Faculty of Science and Technology 730
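The eigenvector property stated in the abstract, that the eigenvector of the largest eigenvalue points toward the source, can be verified on the simplest source, a point mass, whose GGT has a closed form (GM = 1 and the coordinates below are arbitrary illustrative values).

```python
import numpy as np

def point_mass_ggt(obs, src, GM=1.0):
    """Gravity gradient tensor at `obs` due to a point mass at `src`:
    V_ij = GM * (3 r_i r_j - |r|^2 delta_ij) / |r|^5, with r = src - obs.
    The tensor is symmetric and trace-free (Laplace's equation)."""
    r = np.asarray(src, float) - np.asarray(obs, float)
    R = np.linalg.norm(r)
    return GM * (3.0 * np.outer(r, r) - R**2 * np.eye(3)) / R**5

src = np.array([2.0, 1.0, 3.0])     # illustrative source location
obs = np.array([0.0, 0.0, 0.0])     # measurement point
T = point_mass_ggt(obs, src)

vals, vecs = np.linalg.eigh(T)              # symmetric: real eigensystem
v_max = vecs[:, np.argmax(vals)]            # eigenvector of max eigenvalue
direction = (src - obs) / np.linalg.norm(src - obs)
print(np.abs(v_max @ direction))            # ~1: it points at the source
```

For a point mass the alignment is exact (eigenvalues 2GM/R^3 along r and -GM/R^3 across it); for extended bodies the abstract's statement holds approximately, with the eigenvector pointing toward the center of mass.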
1220

Change point estimation in noisy Hammerstein integral equations / Sprungstellen-Schätzer für verrauschte Hammerstein Integral Gleichungen

Frick, Sophie 02 December 2010 (has links)
No description available.
