571

Utformning av mjukvarusensorer för avloppsvatten med multivariata analysmetoder / Design of soft sensors for wastewater with multivariate analysis

Abrahamsson, Sandra January 2013 (has links)
Every study of a real process or system is based on measured data. In the past, the amount of data available for an investigation was very limited, but with modern technology measurement data are far more accessible: where there were once only a few, often disjointed measurements of a single variable, there are now numerous, practically continuous measurements of a large number of variables. This considerably changes the possibilities for understanding and describing processes. Multivariate analysis is often used when large data sets with many variables are evaluated. In this project, the multivariate analysis methods PCA (principal component analysis) and PLS (partial least squares projection to latent structures) were applied to wastewater data collected at Hammarby Sjöstadsverk, a wastewater treatment plant (WWTP). Society places ever stricter demands on treatment plants to reduce their environmental impact. With improved process knowledge, among other things, the systems can be monitored and controlled so that resource consumption is reduced without degrading treatment performance. Some variables are easy to measure directly in the water, while others require more extensive laboratory analysis. Parameters in the latter category that are important for treatment performance include the wastewater's content of phosphorus and nitrogen, which demand resources in the form of chemicals for phosphorus precipitation and energy for aeration of the biological treatment stage. The concentrations of these substances in the influent vary over the day and are difficult to monitor. The purpose of this study was to investigate whether it is possible to obtain information about the hard-to-measure variables from the easily measured ones, by using multivariate analysis to build models of the variables. Such models are often called soft sensors, since they are not physical sensors. Measurements of the wastewater in Line 1 were made at several points in the process during March 11-15, 2013, after which several multivariate models were built to explain the hard-to-measure variables. The results show that information about these variables can be obtained with PLS models built on more easily measured data. The models performed best at explaining influent nitrogen, but further validation is needed to firmly establish their accuracy.
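A soft sensor of this kind can be prototyped in a few lines. The sketch below is illustrative only: the feature matrix X standing in for easy-to-measure signals, the target y standing in for a lab-analyzed quantity such as influent nitrogen, and the choice of two latent components are all assumptions, not details taken from the thesis.

```python
# Minimal PLS soft-sensor sketch (illustrative, not the thesis code):
# predict a hard-to-measure variable from easy-to-measure signals.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Stand-ins for easy-to-measure signals (e.g. flow, conductivity,
# turbidity, temperature) and a lab-measured target such as nitrogen.
X = rng.normal(size=(200, 4))
y = X @ np.array([0.8, 1.5, -0.4, 0.2]) + rng.normal(scale=0.3, size=200)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

pls = PLSRegression(n_components=2)   # number of latent components is a choice
pls.fit(X_train, y_train)
print("held-out R^2:", pls.score(X_test, y_test))
```

Held-out validation of this kind is exactly the extra step the abstract calls for before such models are trusted in operation.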
572

Nonnegative matrix and tensor factorizations, least squares problems, and applications

Kim, Jingu 14 November 2011 (has links)
Nonnegative matrix factorization (NMF) is a useful dimension reduction method that has been investigated and applied in various areas. NMF is considered for high-dimensional data in which each element has a nonnegative value, and it provides a low-rank approximation formed by factors whose elements are also nonnegative. The nonnegativity constraints imposed on the low-rank factors not only enable natural interpretation but also reveal the hidden structure of data. Extending the benefits of NMF to multidimensional arrays, nonnegative tensor factorization (NTF) has been shown to be successful in analyzing complicated data sets. Despite this success, NMF and NTF have been actively developed only over the past decade, and algorithmic strategies for computing NMF and NTF have not been fully studied. In this thesis, computational challenges regarding NMF, NTF, and related least squares problems are addressed. First, efficient algorithms for NMF and NTF are investigated based on a connection between the NMF and NTF problems and nonnegativity-constrained least squares (NLS) problems. A key strategy is to observe the typical structure of the NLS problems arising in NMF and NTF computation and to design a fast algorithm that exploits this structure. We propose an accelerated block principal pivoting method to solve the NLS problems, thereby significantly speeding up the NMF and NTF computation. Implementation results with synthetic and real-world data sets validate the efficiency of the proposed method. In addition, a theoretical result on the classical active-set method for rank-deficient NLS problems is presented. Although the block principal pivoting method appears generally more efficient than the active-set method for the NLS problems, it is not applicable to rank-deficient cases. We show that the active-set method with a proper starting vector can actually solve the rank-deficient NLS problems without ever running into rank-deficient least squares problems during iterations. Going beyond the NLS problems, we show that a block principal pivoting strategy can also be applied to l1-regularized linear regression. The l1-regularized linear regression, also known as the Lasso, has been very popular due to its ability to promote sparse solutions. Solving this problem is difficult because the l1-regularization term is not differentiable. A block principal pivoting method and its variant, which overcome a limitation of previous active-set methods, are proposed for this problem with successful experimental results. Finally, a group-sparsity regularization method for NMF is presented. A recent challenge in data analysis for science and engineering is that data are often represented in a structured way. In particular, many data mining tasks have to deal with group-structured prior information, where features or data items are organized into groups. Motivated by the observation that features or data items that belong to a group are expected to share the same sparsity pattern in their latent factor representations, we propose mixed-norm regularization to promote group-level sparsity. Efficient convex optimization methods for dealing with the regularization terms are presented along with computational comparisons between them. Application examples of the proposed method in factor recovery, semi-supervised clustering, and multilingual text analysis are presented.
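The ANLS framework underlying this line of work alternates two nonnegativity-constrained least squares solves. The sketch below illustrates that alternation using SciPy's generic nnls solver as a stand-in for the accelerated block principal pivoting method proposed in the thesis; the matrix sizes, rank and iteration count are arbitrary.

```python
# Alternating nonnegative least squares (ANLS) sketch for NMF:
# minimize ||A - W H||_F^2 subject to W >= 0, H >= 0.
import numpy as np
from scipy.optimize import nnls   # stand-in for block principal pivoting

rng = np.random.default_rng(1)
A = np.abs(rng.normal(size=(30, 20)))   # nonnegative data matrix
k = 5                                   # target rank

W = np.abs(rng.normal(size=(30, k)))
H = np.zeros((k, 20))

for _ in range(50):
    # Fix W, solve an NLS problem for each column of H ...
    for j in range(A.shape[1]):
        H[:, j], _ = nnls(W, A[:, j])
    # ... then fix H, solve an NLS problem for each row of W.
    for i in range(A.shape[0]):
        W[i, :], _ = nnls(H.T, A[i, :])

print("relative error:", np.linalg.norm(A - W @ H) / np.linalg.norm(A))
```

The column-by-column structure of the two inner loops is the "typical structure" the abstract refers to: the NLS subproblems within each half-step share one coefficient matrix, so a specialized solver can amortize work across them.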
573

Implicit Least Squares Kinetic Upwind Method (LSKUM) And Implicit LSKUM Based On Entropy Variables (q-LSKUM)

Swarup, A Sri Sakti 07 1900 (has links)
With increasing demand for computational solutions of fluid dynamical problems, researchers around the world are working on the development of highly robust numerical schemes capable of solving flow problems around the complex geometries arising in aerospace engineering. Considerable time and effort are also devoted to the development of convergence acceleration devices for reducing the computational time required for such numerical solutions. Reduction in run times is vital for production codes, which are used many times in a design cycle. In the present work, we consider a numerical scheme called LSKUM, capable of operating on any arbitrary distribution of points. LSKUM is being used at the CFD Centre (IISc) and DRDL (Hyderabad) to compute flows around practical geometries, and presently these LSKUM-based codes are explicit. Earlier researchers have already observed that the explicit schemes for these methods are robust. It is therefore essential to consider the possibility of accelerating explicit LSKUM by making it implicit, and the present thesis focuses on such a study. We start with two kinetic schemes, namely the Least Squares Kinetic Upwind Method (LSKUM) and LSKUM based on entropy variables (q-LSKUM), and develop two implicit schemes from them: (i) a non-linear iterative implicit scheme, called LSKUM-NII, and (ii) a linearized Beam-Warming implicit scheme, called LSKUM-BW. To demonstrate the efficiency of the newly developed implicit schemes, we consider flow past a NACA0012 airfoil as a test example in two flow regimes: •Subsonic case: M∞ = 0.63, angle of attack = 2.0° •Transonic case: M∞ = 0.85, angle of attack = 1.0° The speedup of the two implicit schemes has been studied by running them on different grid sizes: •Coarse grid: 4074 points •Medium grid: 8088 points •Fine grid: 16594 points The results obtained with these implicit schemes are very encouraging: they give as much as a 2.8-times speedup compared to their corresponding explicit versions. Further improvement is possible by combining implicit LSKUM with modern iterative methods for solving the resulting algebraic equations. The present work is a first step towards this objective.
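The payoff of implicit time marching can be seen on a toy problem. The sketch below applies backward-Euler upwinding to 1-D linear advection, where the implicit update remains stable at CFL numbers well above the explicit limit of 1; it illustrates the general principle only and has no connection to the LSKUM codes themselves.

```python
# Explicit vs. implicit first-order upwind for u_t + a u_x = 0 (a > 0).
# The backward-Euler update stays stable for CFL >> 1, which is the
# kind of run-time reduction implicit schemes are built for.
import numpy as np

n, a = 200, 1.0
dx = 1.0 / n
x = np.linspace(0.0, 1.0, n, endpoint=False)
u = np.exp(-200 * (x - 0.3) ** 2)   # initial Gaussian pulse

cfl = 4.0                           # explicit upwind diverges above cfl = 1
dt = cfl * dx / a

for _ in range(25):
    u_old = u.copy()
    # Implicit update (1 + cfl) u_i - cfl u_{i-1} = u_i_old, solved by a
    # left-to-right sweep; the periodic wrap at i = 0 is lagged one step.
    for i in range(n):
        u[i] = (u_old[i] + cfl * u[i - 1]) / (1.0 + cfl)

print("solution stays bounded, max =", round(u.max(), 3))
```

The price, as with any implicit scheme, is solving an algebraic system each step; here the system is bidiagonal, so a single substitution sweep suffices.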
574

Weighted Least Squares Kinetic Upwind Method Using Eigendirections (WLSKUM-ED)

Arora, Konark 11 1900 (has links)
Least Squares Kinetic Upwind Method (LSKUM), a grid-free method based on kinetic schemes, has been gaining popularity over conventional CFD methods for the computation of inviscid and viscous compressible flows past complex configurations. The main reason for the growing popularity of this method is its ability to work on any point distribution. Grid-free methods do not require a grid for flow simulation, which is an essential requirement for all other conventional CFD methods; they do, however, require a point distribution or cloud of points. Point generation is relatively simple and less time consuming than grid generation. There are various methods for point generation, such as an advancing front method, a quadtree-based point generation method, a structured grid generator, an unstructured grid generator, or a combination of the above. One of the easiest ways of generating points around complex geometries is to overlap the simple point distributions generated around the individual constituent parts of the complex geometry. The least squares grid-free method has been used successfully to solve a large number of flow problems over the years. However, some problems are still encountered while using this method on point distributions around complex configurations. Close analysis has revealed that bad connectivity of the nodes is the cause, and this leads to bad-connectivity-related code divergence. The least squares (LS) grid-free method called LSKUM involves discretization of the spatial derivatives using the least squares approach. The formulae for the spatial derivatives are obtained by minimizing the sum of the squares of the error, leading to a system of linear algebraic equations whose solution gives the formulae for the spatial derivatives. The least squares matrix A for the 1-D and 2-D cases respectively is given by A = \sum_i \Delta x_i^2 \quad \text{and} \quad A = \begin{pmatrix} \sum_i \Delta x_i^2 & \sum_i \Delta x_i \Delta y_i \\ \sum_i \Delta x_i \Delta y_i & \sum_i \Delta y_i^2 \end{pmatrix}. The 1-D LS formula for the spatial derivatives is always well behaved, in the sense that \sum_i \Delta x_i^2 can never become zero. In the 2-D case, however, problems can arise. The elements of the LS matrix A are functions of the coordinate differentials of the nodes in the connectivity; bad connectivity of a node can thus have an adverse effect on the nature of the LS matrices. There are various types of bad connectivity for a node, such as an insufficient number of nodes in the connectivity, a highly anisotropic distribution of nodes in the connectivity stencil, or the nodes falling nearly on a line (or a plane in 3-D). In multidimensions, all nodes falling on a line makes the matrix A singular, thereby making its inversion impossible. Likewise, an anisotropic distribution of nodes in the connectivity can make the matrix A highly ill-conditioned, leading either to loss of accuracy or to code divergence. To overcome this problem, the approach followed so far has been to modify the connectivity by including more neighbours in the connectivity of the node. In this thesis, we have followed a different approach: using weights to alter the nature of the LS matrix A, which becomes A(w) = \begin{pmatrix} \sum_i w_i \Delta x_i^2 & \sum_i w_i \Delta x_i \Delta y_i \\ \sum_i w_i \Delta x_i \Delta y_i & \sum_i w_i \Delta y_i^2 \end{pmatrix}. The weighted LS formulae for the spatial derivatives in 1-D and 2-D follow from A(w), where the weights w_i are all positive. So we ask a question: can we reduce the multidimensional LS formula for the derivatives to the 1-D type formula and make use of the advantages of the 1-D type formula in multidimensions?
Taking a closer look at the LS matrices, we observe that they are real, symmetric matrices with real eigenvalues and a real, distinct set of eigenvectors. The eigenvectors of these matrices are orthogonal, and along the eigendirections the corresponding LS formulae reduce to 1-D type formulae. A problem now arises, however, in combining the eigendirections with upwinding. Upwinding, which in LS is done by stencil splitting, is essential to provide stability to the numerical scheme. It involves choosing a direction for enforcing upwinding: the stencil is split along the chosen direction. But the chosen direction is not necessarily along one of the eigendirections of the split stencil, so in general we will not be able to use the 1-D type formulae along the chosen direction. This difficulty has been overcome by the use of weights, leading to WLSKUM-ED (Weighted Least Squares Kinetic Upwind Method using Eigendirections). In WLSKUM-ED, the weights are chosen so that a chosen direction becomes an eigendirection of A(w). As a result, the multidimensional LS formulae reduce to 1-D type formulae along the eigendirections, and all the advantages of the 1-D LS formulae can thus be exploited even in multidimensions. A very simple and novel way to calculate the positive weights, utilizing the coordinate differentials of the neighbouring nodes in the connectivity in 2-D and 3-D, has been developed for this purpose. The method is based on the fact that the summations of the coordinate differentials have different signs (+ or -) in different quadrants or octants of the split stencil. It is shown that the choice of suitable weights is equivalent to a suitable decomposition of the vector space: the weights either fully diagonalize the least squares matrix, i.e. decompose the 3-D vector space R3 as R3 = e1 + e2 + e3, where e1, e2 and e3 are the eigenvectors of A(w), or they make the chosen direction an eigendirection, i.e. decompose R3 as R3 = e1 + (2-D vector space R2). The positive weights not only prevent the denominator of the 1-D type LS formulae from going to zero, but also preserve the LED property of the least squares method. WLSKUM-ED has been applied successfully to a large number of 2-D and 3-D test cases in various flow regimes, for a variety of point distributions ranging from a simple cloud generated by a structured grid generator (the shock reflection problem in 2-D and supersonic flow past a hemisphere in 3-D) to multiple chimera clouds generated from multiple overlapping meshes (the BI-NACA test case in 2-D and the FAME cloud for the M165 configuration in 3-D), demonstrating the robustness of the WLSKUM-ED solver. It must be noted that the second-order accurate computations using this method have been performed without the use of limiters in all flow regimes. No spurious oscillations or wiggles in the captured shocks have been observed, indicating preservation of the LED property of the method even for second-order accurate computations. Convergence acceleration of the WLSKUM-ED code has been achieved by the use of the LUSGS method; the use of 1-D type formulae has simplified the application of the LUSGS method in the grid-free framework. The advantage of the LUSGS method is that the evaluation and storage of the Jacobian matrices can be eliminated by approximating the split flux Jacobians in the implicit operator itself. Numerical results reveal a speedup of four with the LUSGS method compared to the explicit time marching method. The 2-D WLSKUM-ED code has also been used to perform internal flow computations. Internal flows are flows confined within boundaries, on which the inflow and outflow boundaries have a significant effect. Accurate treatment of these boundary conditions is essential, particularly if the flow condition at the outflow boundary is subsonic or transonic. The Kinetic Periodic Boundary Condition (KPBC), developed to enable single-passage (SP) flow computations to be performed in place of multi-passage (MP) flow computations, utilizes the moment method strategy. The state update formula for points on the periodic boundaries is identical to that for interior points and can easily be extended to second-order accuracy like the interior points. Numerical results show that SP flow computations with KPBC successfully reproduce the MP flow computation results. The inflow and outflow boundary conditions at the respective boundaries have been enforced by the use of the Kinetic Outer Boundary Condition (KOBC). These boundary conditions have been validated by performing flow computations for the 3rd test case of the 4th standard blade configuration of the turbine blade; the numerical results show good agreement with the experimental results.
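The core computation, a least squares estimate of spatial derivatives from a node's connectivity, is compact. The sketch below estimates (df/dx, df/dy) at a point from scattered neighbours with an optional weight vector; it is a generic illustration of the (weighted) LS formulae above, not the thesis solver, and the point cloud is made up.

```python
# Weighted least squares gradient at a node from its connectivity:
# solve A(w) grad = b(w), where A(w) is the 2x2 weighted moment matrix
# of the coordinate differentials, as in grid-free LS methods.
import numpy as np

def wls_gradient(p0, f0, nbrs, f_nbrs, w=None):
    d = nbrs - p0                 # coordinate differentials (dx_i, dy_i)
    df = f_nbrs - f0
    if w is None:
        w = np.ones(len(d))       # unweighted LS as the default
    A = (d * w[:, None]).T @ d    # [[sum w dx^2, sum w dx dy], [., sum w dy^2]]
    b = (d * w[:, None]).T @ df
    return np.linalg.solve(A, b)  # ill-conditioned A = bad connectivity

# Check on f(x, y) = 3x - 2y, whose exact gradient is (3, -2).
rng = np.random.default_rng(2)
p0 = np.array([0.5, 0.5])
nbrs = p0 + 0.01 * rng.normal(size=(8, 2))
f = lambda p: 3 * p[..., 0] - 2 * p[..., 1]
print(wls_gradient(p0, f(p0), nbrs, f(nbrs)))   # ~ [ 3. -2.]
```

A connectivity whose neighbours fall nearly on a line makes A close to singular here, which is exactly the failure mode the thesis's choice of weights is designed to avoid.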
575

Efeitos de demanda e de oferta na estrutura de capital de companhias abertas no Brasil / Demand and supply effects on the capital structure of public companies in Brazil

Campos, Anderson Luis Saber 11 September 2007 (has links)
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior / Drawing on capital structure theory and the application of structural equations, a model was proposed to evaluate the indebtedness of public companies in Brazil. The effects of financial distress (direct and indirect bankruptcy costs), tax benefits, agency costs of free cash flow and agency costs of debt were considered. After computing the results, an alternative model was analyzed, which indicates the relevance of capital demand and supply effects on companies' level of indebtedness. Evidence was found that financial distress and the agency costs of debt influence the determination of the capital structure of the companies analyzed.
576

Lastbalanseringsalgoritmer : En utvärdering av lastbalanseringsalgoritmer i ett LVS-kluster där noderna har olika operativsystem / Load balancing algorithms: An evaluation of load balancing algorithms in an LVS cluster where the nodes run different operating systems

Brissman, Alexander, Brissman, Joachim January 2012 (has links)
This report covers an investigation of different load balancing algorithms in Linux Virtual Server. The investigation was done in a web cluster (with Apache as the web server) consisting of three heterogeneous nodes, where the operating system was the detail that differentiated the nodes. The operating systems used in the investigation were Windows Server 2008 R2, CentOS 6.2 and FreeBSD 9.0. The factors examined for the different algorithms were the cluster's average response time under different loads and how many connections the cluster could handle; both were measured with the tool httperf. The investigation answers how a heterogeneous web cluster's average response time and working capacity can differ depending on which algorithm is used for load balancing. The results show that the average response time stays low until a sudden rise occurs. Shortest Expected Delay and Weighted Least-Connection Scheduling could handle the largest number of connections.
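The two schedulers that handled the most connections are easy to state: Weighted Least-Connection picks the server minimizing active connections divided by weight, while Shortest Expected Delay minimizes (active + 1) / weight. The sketch below implements the two published selection rules; the server names, weights and connection counts are invented to show a case where the rules disagree.

```python
# Selection rules for two LVS schedulers compared in the report:
#   WLC: minimize conns_i / weight_i
#   SED: minimize (conns_i + 1) / weight_i
# Sketches of the published rules, not LVS kernel code.

servers = [
    # (name, weight, active connections) -- illustrative values
    ("windows", 2, 5),
    ("centos",  4, 9),
    ("freebsd", 1, 2),
]

def pick_wlc(servers):
    return min(servers, key=lambda s: s[2] / s[1])

def pick_sed(servers):
    return min(servers, key=lambda s: (s[2] + 1) / s[1])

print("WLC picks:", pick_wlc(servers)[0])   # freebsd (2/1 = 2.0 is lowest)
print("SED picks:", pick_sed(servers)[0])   # centos (10/4 = 2.5 is lowest)
```

SED's "+1" charges each candidate for the request about to be assigned, which is why it avoids sending requests to lightly loaded but low-weight servers.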
577

Multivariate analysis of high-throughput sequencing data / Analyses multivariées de données de séquençage à haut débit

Durif, Ghislain 13 December 2016 (has links)
The statistical analysis of next-generation sequencing (NGS) data raises many computational challenges regarding modeling and inference, especially because of the high dimensionality of genomic data. The research work in this manuscript concerns hybrid dimension-reduction methods that rely on both compression (representation of the data in a lower-dimensional space) and variable selection. Developments are made concerning the sparse Partial Least Squares (PLS) regression framework for supervised classification, and the sparse matrix factorization framework for unsupervised exploration. In both situations, our main purpose is the reconstruction and visualization of the data. First, we present a new sparse PLS approach, based on an adaptive sparsity-inducing penalty, that is suitable for logistic regression to predict the label of a discrete outcome, for instance the fate of patients or the type of unidentified single cells, based on gene expression profiles. The main issue in this framework is to account for the response in order to discard irrelevant variables. We highlight the direct link between the derivation of the algorithms and the reliability of the results. Then, motivated by questions regarding single-cell data analysis, we propose a flexible model-based approach for the factorization of count matrices that accounts for over-dispersion as well as zero-inflation (both characteristic of single-cell data), for which we derive an estimation procedure based on variational inference. In this scheme, we consider probabilistic variable selection based on a spike-and-slab model suitable for count data. The interest of our procedure for data reconstruction, visualization and clustering is illustrated by simulation experiments and by preliminary results on single-cell data analysis. All proposed methods were implemented in two R packages, "plsgenomics" and "CMF", based on high-performance computing.
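A sparse PLS component can be obtained by soft-thresholding the ordinary PLS weight vector, which is the basic mechanism that sparsity-inducing PLS penalties build on. The sketch below shows that single step for one component with a fixed threshold; it is a generic illustration, not the adaptive-penalty method or the logistic extension developed in the thesis, and the data and threshold are invented.

```python
# One sparse PLS component by soft-thresholding the PLS weights:
# w = X^T y maximizes covariance with the response; soft-thresholding
# zeroes out weak variables so irrelevant ones are discarded.
import numpy as np

def sparse_pls_direction(X, y, lam):
    w = X.T @ y                                         # ordinary PLS weights
    w = np.sign(w) * np.maximum(np.abs(w) - lam, 0.0)   # soft-threshold
    norm = np.linalg.norm(w)
    return w / norm if norm > 0 else w

rng = np.random.default_rng(3)
X = rng.normal(size=(100, 50))      # e.g. 50 gene-expression features
beta = np.zeros(50)
beta[:5] = 2.0                      # only 5 informative variables
y = X @ beta + rng.normal(size=100)

w = sparse_pls_direction(X - X.mean(0), y - y.mean(), lam=120.0)
print("variables kept:", np.flatnonzero(w))   # mostly among the first 5
```

The threshold lam plays the role of the penalty level; the thesis's contribution is, among other things, to adapt that level rather than fix it by hand.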
578

基於最小一乘法的室外WiFi匹配定位之研究 / Study on Outdoor WiFi Matching Positioning Based on Least Absolute Deviation

林子添 Unknown Date (has links)
As WiFi coverage in cities has become widespread, positioning methods based on WiFi signal strength have been developed. WiFi matching positioning collects reference-point coordinates and WiFi received signal strength indicator (RSSI) values, computes RSSI model parameters by least squares (LS), and then estimates the user's position from the model parameters and the signal strengths observed at the user's location. WiFi signal strength, however, is easily degraded by environmental factors such as rainfall, building obstruction and crowd movement; positioning with degraded signal strengths shifts the estimated position away from the true one. To reduce the positioning error caused by signal-strength errors, this study combines the robust least absolute deviation (LAD) estimator with WiFi matching positioning to overcome the sensitivity of WiFi signals to the environment, aiming for more reliable positioning results. Simulated data were first built to test the performance of LAD WiFi matching positioning under different gross-error conditions; real WiFi signals were then used for matching positioning, and the results of LAD and LS WiFi matching positioning were compared to examine the characteristics of the two methods. The results show that, on the simulated data, LAD WiFi matching positioning is more robust than LS when the AP signal strengths received at either the reference points or the check points contain gross errors, and it can detect outliers in the reference-point AP signals. In the real environment, LAD WiFi matching positioning is also more robust than LS: outdoors, the accuracy of LAD and LS WiFi matching positioning is 8.46 m and 8.57 m respectively; indoors, 2.20 m and 2.41 m respectively.
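The robustness of LAD over LS is easy to demonstrate: a common way to fit LAD is iteratively reweighted least squares, where each pass down-weights large residuals. The sketch below fits both estimators to a line with one gross error, mimicking a degraded RSSI reading; it illustrates the estimator only and is unrelated to the thesis implementation.

```python
# Least squares vs. least absolute deviation (LAD) with one outlier.
# LAD is approximated by iteratively reweighted least squares (IRLS):
# each pass solves weighted LS with weights 1/|residual|.
import numpy as np

rng = np.random.default_rng(4)
x = np.linspace(0, 10, 30)
A = np.column_stack([np.ones_like(x), x])
y = 2.0 + 0.5 * x + rng.normal(scale=0.1, size=30)
y[5] += 20.0                          # gross error, e.g. a degraded signal

beta_ls = np.linalg.lstsq(A, y, rcond=None)[0]

beta = beta_ls.copy()
for _ in range(50):                   # IRLS approximation of the L1 fit
    r = np.abs(y - A @ beta)
    w = 1.0 / np.maximum(r, 1e-6)     # floor avoids division by zero
    Aw = A * w[:, None]
    beta = np.linalg.solve(A.T @ Aw, Aw.T @ y)

print("LS  fit:", beta_ls)            # pulled toward the outlier
print("LAD fit:", beta)               # close to the true (2.0, 0.5)
```

The large residual at the corrupted point receives a tiny weight after the first pass, which is the mechanism behind LAD's outlier detection ability noted in the abstract.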
579

Kernel LMS à noyau gaussien : conception, analyse et applications à divers contextes / Gaussian kernel least-mean-square : design, analysis and applications

Gao, Wei 09 December 2015 (has links)
The main objective of this thesis is to derive and analyze the Gaussian kernel least-mean-square (LMS) algorithm within three frameworks involving single and multiple kernels, real-valued and complex-valued data, and non-cooperative and cooperative distributed learning over networks. This work focuses on the stochastic behavior of these kernel LMS algorithms in the mean and mean-square-error sense; the analytical convergence models obtained are validated by numerical simulations. First, we review the basic LMS algorithm, the reproducing kernel Hilbert space (RKHS) framework, and state-of-the-art kernel adaptive filtering algorithms. Then, we study the convergence behavior of the Gaussian kernel LMS in the case where the statistics of the elements of the so-called dictionary only partially match the statistics of the input data. We introduce a modified kernel LMS algorithm based on forward-backward splitting to deal with $\ell_1$-norm regularization, and discuss its stability. After a review of two families of multikernel LMS algorithms, we focus on the convergence analysis of the multiple-input multikernel LMS algorithm; more generally, the characteristics of both multikernel LMS algorithms are analyzed theoretically and confirmed by simulations. Next, the augmented complex kernel LMS algorithm is introduced within the framework of complex multikernel adaptive filtering, and its convergence behavior is analyzed in the mean-square-error sense. Finally, in order to cope with distributed estimation problems over networks, we derive functional diffusion strategies in RKHS and analyze the stability of the algorithm in the mean sense.
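The Gaussian kernel LMS update itself is short enough to state in full: predict with a kernel expansion over a dictionary, then append the new input with a coefficient proportional to the error. The sketch below is the textbook KLMS loop with an unbounded dictionary; the step size, kernel width and toy system are arbitrary choices, and none of the thesis's dictionary or regularization machinery is included.

```python
# Textbook Gaussian kernel LMS (KLMS): the filter is a growing kernel
# expansion f(x) = sum_j alpha_j k(x, d_j); each step appends the new
# input to the dictionary with coefficient eta * error.
import numpy as np

def gauss_kernel(x, D, sigma=0.5):
    return np.exp(-np.sum((D - x) ** 2, axis=1) / (2 * sigma ** 2))

rng = np.random.default_rng(5)
eta = 0.2                                  # step size (arbitrary)
D, alpha, errors = [], np.array([]), []

for _ in range(500):
    x = rng.uniform(-1, 1, size=2)
    d = np.sin(3 * x[0]) * np.cos(x[1])    # unknown nonlinear system
    y = alpha @ gauss_kernel(x, np.array(D)) if D else 0.0
    e = d - y
    D.append(x)                            # dictionary grows without bound
    alpha = np.append(alpha, eta * e)
    errors.append(e ** 2)

print("MSE, first 50 steps:", np.mean(errors[:50]))
print("MSE, last 50 steps :", np.mean(errors[-50:]))
```

The learning curve printed here (mean-square error early versus late) is exactly the quantity whose transient and steady-state behavior the thesis models analytically.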
580

Robust Least Squares Kinetic Upwind Method For Inviscid Compressible Flows

Ghosh, Ashis Kumar 06 1900 (has links) (PDF)
No description available.
