941

Estimação de estado: a interpretação geométrica aplicada ao processamento de erros grosseiros em medidas / State estimation: the geometrical interpretation applied to the processing of gross errors in measurements

Carvalho, Breno Elias Bretas de 22 March 2013 (has links)
This work implements a computer program to estimate the states (complex nodal voltages) of an electric power system (EPS) and applies alternative methods for processing gross errors (GEs), based on the geometrical interpretation of the measurement errors and on the concept of measurement innovation. Through the geometrical interpretation, BRETAS et al. (2009), BRETAS; PIERETI (2010), BRETAS; BRETAS; PIERETI (2011) and BRETAS et al. (2013) proved mathematically that the measurement error is composed of detectable and undetectable components, and that the detectable component of the error is exactly the measurement residual. The methods used so far for processing GEs consider only the detectable component of the error and, as a consequence, may fail. In an attempt to overcome this limitation, and building on the works cited above, two alternative methodologies for processing measurements carrying GEs were studied and implemented. The first is based on direct analysis of the components of the measurement errors; the second, like the traditional methods, is based on analysis of the measurement residuals. The distinguishing feature of the second proposed methodology is that it does not use a fixed threshold for detecting measurements with GEs; instead, a new threshold value (TV), characteristic of each measurement, is adopted, as presented in the work of PIERETI (2011). Furthermore, to reinforce this theory, an alternative way to compute these thresholds is proposed, based on analyzing the geometry of the probability density function of the multivariate normal distribution of the measurement residuals.
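The baseline that these methodologies improve on is the classical largest-normalized-residual test. Below is a minimal sketch of that test, assuming a linear(ized) measurement model; the matrix names and the fixed cutoff are illustrative, and the thesis's contribution is precisely to replace that single fixed cutoff with a per-measurement threshold value (TV).

```python
import numpy as np

def normalized_residuals(H, R, z, x_hat, h):
    """Classical largest-normalized-residual test for gross errors.

    H     : (m, n) measurement Jacobian at the WLS solution
    R     : (m, m) measurement error covariance
    z     : (m,) measurement vector
    x_hat : (n,) estimated state
    h     : callable mapping the state to predicted measurements
    """
    r = z - h(x_hat)                      # measurement residuals
    W = np.linalg.inv(R)
    G = H.T @ W @ H                       # gain matrix
    K = H @ np.linalg.solve(G, H.T @ W)   # hat matrix
    S = np.eye(len(z)) - K                # residual sensitivity matrix
    Omega = S @ R                         # residual covariance
    return r / np.sqrt(np.diag(Omega))

# A measurement is flagged when its normalized residual exceeds a
# threshold; classically a single cutoff such as 3.0 is used, which
# the thesis replaces with a TV characteristic of each measurement.
```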
942

Sur un problème inverse en pressage de matériaux biologiques à structure cellulaire / On an inverse problem in pressing of biological materials with cellular structure

Ahmed Bacha, Rekia Meriem 19 October 2018 (has links)
This thesis, proposed in the framework of the W2P1-DECOL project (SAS PIVERT) and funded by the Ministry of Higher Education, is devoted to the study of an inverse problem in the pressing of biological materials with a cellular structure. The aim is to identify, from measurements of the outgoing oil flow, the consolidation coefficient of the pressing cake and the inverse of the characteristic consolidation time at two levels: that of the rapeseed and that of the pressing cake. First, we present a system of parabolic equations modeling the pressing of biological materials with cellular structure; it follows from the continuity equation, Darcy's law and other simplifying hypotheses. The theoretical and numerical analysis of the direct model is then carried out in the linear case, and the finite difference method is used to discretize it. In a second step, we introduce the inverse pressing problem, whose identifiability is established by a spectral method. We then study local and global Lipschitz stability, and a global Lipschitz stability estimate is established for the inverse parameter problem, for the system of parabolic equations, from measurements on ]0,T[. Finally, parameter identification is solved by two methods: one based on an adaptation of the algebraic method, the other formulated as a least-squares minimization of a functional measuring the gap between the measurements and the output of the direct model. This inverse problem is solved with an iterative BFGS algorithm, which is validated and then tested numerically on rapeseed using synthetic measurements. It gives very satisfactory results, despite the difficulties encountered in handling and exploiting the experimental data.
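The final identification step is a least-squares fit driven by BFGS. The sketch below shows the general shape of such a fit, assuming a forward solver `simulate(params, t_obs)` standing in for the discretized parabolic pressing model; it is an illustration of the approach, not the thesis's code.

```python
import numpy as np
from scipy.optimize import minimize

def misfit(params, t_obs, flux_obs, simulate):
    """Least-squares gap between the measured oil flux and the output
    of the direct model (finite-difference solver in the thesis)."""
    return 0.5 * np.sum((simulate(params, t_obs) - flux_obs) ** 2)

def identify(p0, t_obs, flux_obs, simulate):
    # BFGS with a finite-difference gradient, standing in for the
    # iterative BFGS algorithm used in the thesis
    res = minimize(misfit, p0, args=(t_obs, flux_obs, simulate),
                   method="BFGS")
    return res.x  # consolidation coefficient, inverse time constant
```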
943

Méthodes isogéométriques pour les équations aux dérivées partielles hyperboliques / Isogeometric methods for hyperbolic partial differential equations

Gdhami, Asma 17 December 2018 (has links)
Isogeometric Analysis (IGA) is a modern strategy for the numerical solution of partial differential equations, originally proposed by Thomas Hughes, Austin Cottrell and Yuri Bazilevs in 2005. This discretization technique is a generalization of classical finite element analysis (FEA), designed to integrate computer-aided design (CAD) and FEA and so close the gap between the geometric description and the analysis of engineering problems. This is achieved by using B-splines or non-uniform rational B-splines (NURBS) both to describe the geometry and to represent the unknown solution fields. The purpose of this thesis is to study isogeometric methods in the context of hyperbolic problems, using B-splines as basis functions. We also propose a method that combines IGA with the discontinuous Galerkin (DG) method for solving hyperbolic problems: the DG methodology is adopted across patch interfaces, while traditional IGA is employed within each patch, so the proposed method takes advantage of both IGA and DG. Numerical results are presented up to polynomial order p = 4 for both continuous and discontinuous Galerkin methods, and are compared on a range of problems of increasing complexity in 1D and 2D.
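As a concrete illustration of the basis functions involved, the following sketch evaluates a B-spline basis with SciPy; in IGA the same functions describe the geometry and carry the unknown solution field. The knot vector and degree below are arbitrary examples.

```python
import numpy as np
from scipy.interpolate import BSpline

def bspline_basis(knots, degree, x):
    """Evaluate every B-spline basis function of the given degree on
    the open knot vector `knots` at points `x` (one column each)."""
    n = len(knots) - degree - 1           # number of basis functions
    B = np.empty((len(x), n))
    for i in range(n):
        coeffs = np.zeros(n)
        coeffs[i] = 1.0                   # isolate the i-th function
        B[:, i] = BSpline(knots, coeffs, degree)(x)
    return B

# quadratic basis on [0, 1] with an open (clamped) knot vector
knots = np.array([0, 0, 0, 0.25, 0.5, 0.75, 1, 1, 1], dtype=float)
B = bspline_basis(knots, 2, np.linspace(0.0, 1.0, 101))
```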
944

股價指數報酬率厚尾程度之研究 / A study of the tail heaviness of stock index returns

李佳晏 Unknown Date (has links)
Many observed time series exhibit leptokurtic (peaked, heavy-tailed) behaviour. Under the hypothesis that the series follow a Paretian distribution, this thesis estimates the maximal order of finite moments of each country's stock index returns at different sampling frequencies in order to gauge the heaviness of their tails. The empirical results show that fourth- and higher-order moments of the index returns mostly exist at every data frequency, and that this does not change with the frequency of the data; from this it can be inferred that outlier activity in the historical distributions of the stock index returns is not severe. Next, the Sample Split Prediction Test is used to examine whether, within a given sample period, the left and right tails of each country's index returns are equally heavy, and whether the heaviness of the left or right tail is stable across periods. Within a single sample period, the left and right tails are found to be roughly equally heavy; across periods, however, the heaviness of the left (or right) tail differs significantly before and after events such as the October 1987 US stock market crash, the 1990-1991 Gulf War, and the 1997 Asian financial crisis. Finally, the cusum-of-squares test is applied to examine whether the unconditional variance of a time series is constant over the observed sample period; the results show that it is not constant for any of the countries' index returns. Inspecting the cusum-of-squares plots together with the cross-period Sample Split Prediction Tests suggests that, when a long time series may contain structural change, these two tests can indicate the dates at which the structural breaks are likely to have occurred.
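The abstract does not spell out the estimator used for the maximal finite moment, but a standard tool for this kind of tail analysis is the Hill estimator, whose output α̂ bounds the order of existing moments (an α̂ above 4 is consistent with the finding that fourth moments exist). The sketch below is illustrative only, not necessarily the thesis's estimator.

```python
import numpy as np

def hill_tail_index(returns, k, tail="right"):
    """Hill estimator of the tail index alpha from the k largest
    observations of one tail; moments of order below alpha exist."""
    x = np.sort(np.asarray(returns) if tail == "right"
                else -np.asarray(returns))
    x = x[x > 0]                       # Hill needs positive values
    top, threshold = x[-k:], x[-k - 1] # k largest and the (k+1)-th
    gamma = np.mean(np.log(top / threshold))
    return 1.0 / gamma                 # tail index alpha

# e.g. compare hill_tail_index(r, 50, "left") with the right tail to
# probe the left/right symmetry the Sample Split Prediction Test checks
```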
945

粒子群最佳化演算法於估測基礎矩陣之應用 / Particle swarm optimization algorithms for fundamental matrix estimation

劉恭良, Liu, Kung Liang Unknown Date (has links)
The fundamental matrix is a crucial quantity in image processing: corresponding-point computation between images, coordinate system conversion, and three-dimensional model reconstruction all depend on its accuracy. In this paper we present a mechanism that uses Particle Swarm Optimization (PSO) to estimate the fundamental matrix; our approach improves the accuracy of the estimate while reducing computation cost. After using the Scale-Invariant Feature Transform (SIFT) to obtain a large number of corresponding points from multi-view images, we choose sets of eight correspondences, using grouping principles together with random sampling to avoid selecting coplanar points, as the starting particles for PSO. Least Median of Squares (LMedS) is used to compute the initial fitness value, and the PSO algorithm is then run, with a minimal iteration count as the convergence criterion, to obtain the best fundamental matrix. We evaluate the mechanism on different object models and compare the results obtained with PSO and with LMedS. The experiments show that, for the same number of iterations, PSO requires about one-eighth of the time of LMedS, and the fundamental matrix it estimates also has a lower average error.
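The core of the fitness evaluation can be sketched as follows: a linear eight-point estimate of F from a particle's eight correspondences, scored by the median squared Sampson (epipolar) error over all SIFT matches, in the spirit of LMedS. This is a simplified illustration; coordinate normalization, which the practical eight-point algorithm needs, is omitted, and it is not the thesis's exact procedure.

```python
import numpy as np

def eight_point(x1, x2):
    """Linear 8-point estimate of F from correspondences given as
    (N, 2) pixel coordinates, under the constraint x2^T F x1 = 0."""
    A = np.column_stack([
        x2[:, 0] * x1[:, 0], x2[:, 0] * x1[:, 1], x2[:, 0],
        x2[:, 1] * x1[:, 0], x2[:, 1] * x1[:, 1], x2[:, 1],
        x1[:, 0], x1[:, 1], np.ones(len(x1))])
    _, _, Vt = np.linalg.svd(A)
    F = Vt[-1].reshape(3, 3)           # null vector of A
    U, S, Vt = np.linalg.svd(F)        # enforce rank 2
    return U @ np.diag([S[0], S[1], 0.0]) @ Vt

def median_epipolar_error(F, x1, x2):
    """LMedS-style fitness: median squared Sampson distance over all
    SIFT correspondences."""
    h1 = np.column_stack([x1, np.ones(len(x1))])
    h2 = np.column_stack([x2, np.ones(len(x2))])
    Fx1 = h1 @ F.T                      # rows are (F x1_i)^T
    Ftx2 = h2 @ F                       # rows are (F^T x2_i)^T
    num = np.sum(h2 * Fx1, axis=1) ** 2 # (x2^T F x1)^2
    den = (Fx1[:, 0]**2 + Fx1[:, 1]**2
           + Ftx2[:, 0]**2 + Ftx2[:, 1]**2)
    return np.median(num / den)
```

In a PSO loop, each particle carries a candidate set of eight correspondences, `eight_point` turns it into an F, and `median_epipolar_error` supplies the fitness to be minimized.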
946

Optimisation of Active Microstrip Patch Antennas

Jacmenovic, Dennis, dennis_jacman@yahoo.com.au January 2004 (has links)
This thesis presents a study of impedance optimisation of active microstrip patch antennas at multiple frequency points. A single-layered aperture-coupled microstrip patch antenna has been optimised to match the source reflection coefficient of a transistor in designing an active antenna, and the active aperture-coupled microstrip patch antenna was optimised to satisfy Global Positioning System (GPS) frequency specifications.

A rudimentary aperture-coupled microstrip patch antenna consists of a rectangular antenna element etched on the top surface of two dielectric substrates. The substrates are separated by a ground plane, and a microstrip feed is etched on the bottom surface; a rectangular aperture in the ground plane provides coupling between the feed and the antenna element. This type of antenna, which conveniently isolates any circuit at the feed from the antenna element, is suitable for integrated circuit design and is simple to fabricate. An active antenna design directly couples an antenna to an active device, thereby saving real estate and power. This thesis focuses on designing an aperture-coupled patch antenna directly coupled to a low-noise amplifier as part of the front end of a GPS receiver.

In this work an in-house software package for calculating aperture-coupled microstrip patch antenna performance parameters, dubbed ACP by its creator Dr Rod Waterhouse, was linked to HP-EEsof, a microwave computer-aided design and simulation package by Hewlett-Packard. An ANSI C module in HP-EEsof was written to bind the two packages, affording the client both the powerful analysis tools of HP-EEsof and the fast analysis of ACP for seamless system design. Moreover, the optimisation algorithms in HP-EEsof were employed to investigate which algorithms are best suited to optimising patch antennas.

Maximum power transfer in electrical circuits is accomplished by matching the impedance between entities, which is generally achieved with a matching network. Passive matching networks employed in amplifier design typically consist of discrete components up to the low-GHz frequency range, or distributed elements at higher frequencies. The source termination of a low-noise amplifier strongly influences its noise, gain and linearity, and is normally controlled by designing a suitable input matching network. The active antenna design presented in this study avoids an input matching network altogether: the antenna is designed to present the desired source termination to the transistor directly. It has been demonstrated that a dual-band microstrip patch antenna can be successfully designed to match the source reflection coefficient, avoiding the need to insert a matching network.

Ten diverse search methods offered in HP-EEsof were used to optimise an active aperture-coupled microstrip patch antenna. This study has shown that the algorithms based on randomised search techniques and the genetic algorithm provide the most robust performance. The optimisation results were used to design an active dual-band antenna.
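The quantity being optimised at each frequency point can be sketched numerically: the reflection coefficient presented by the antenna, compared against the transistor's desired source reflection coefficient. The impedances and targets below are invented for illustration; they are not values from the thesis.

```python
import numpy as np

Z0 = 50.0  # reference impedance in ohms (a common assumption)

def reflection_coefficient(Z):
    """Gamma seen looking into the antenna, referenced to Z0."""
    return (Z - Z0) / (Z + Z0)

def mismatch(Z_antenna, gamma_source_target):
    """Per-frequency objective: distance between the antenna's Gamma
    and the transistor's desired source reflection coefficient."""
    return np.abs(reflection_coefficient(Z_antenna) - gamma_source_target)

# illustrative values at two GPS frequency points (assumed, not
# taken from the thesis)
Z_ant = np.array([48 + 5j, 52 - 8j])            # antenna impedance
gamma_target = np.array([0.20 + 0.10j, 0.15 - 0.05j])
print(mismatch(Z_ant, gamma_target))            # values to drive to 0
```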
947

Itération sur les Politiques Optimiste et Apprentissage du Jeu de Tetris / Optimistic Policy Iteration and Learning the Game of Tetris

Thiery, Christophe 25 November 2010 (has links) (PDF)
This thesis studies policy iteration methods for reinforcement learning in large state spaces with linear approximation of the value function. We first propose a unification of the main algorithms of stochastic optimal control, show the convergence of this unified version to the optimal value function in the tabular case, and give a performance guarantee when the value function is only estimated approximately. We then extend the state of the art of second-order linear approximation algorithms by proposing a generalization of Least-Squares Policy Iteration (LSPI) (Lagoudakis and Parr, 2003). Our new algorithm, Least-Squares λ Policy Iteration (LSλPI), adds to LSPI a concept coming from λ-Policy Iteration (Bertsekas and Ioffe, 1996): damped (or optimistic) evaluation of the value function, which reduces the variance of the estimate and improves sample efficiency. LSλPI thus offers a tunable bias-variance trade-off that can improve the estimate of the value function and the quality of the resulting policy. In a second part, we take a detailed look at the game of Tetris, an application addressed by several works in the literature. Tetris is a difficult problem because of its structure and its large state space. We provide the first comprehensive review of the literature, covering reinforcement learning works as well as evolutionary techniques that directly explore the policy space and hand-tuned algorithms. We observe that reinforcement learning approaches currently perform worse on this problem than direct policy search techniques such as the cross-entropy method (Szita and Lőrincz, 2006). Finally, we explain how we built a Tetris player that beats the performance of the best previously known algorithms and with which we won the Tetris event of the 2008 Reinforcement Learning Competition.
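To make the evaluation step concrete, here is a minimal LSTD(λ) policy-evaluation sketch with linear features, the kind of second-order estimator that LSλPI builds on; the damping/optimism specific to LSλPI and the details of the full algorithm are not reproduced here.

```python
import numpy as np

def lstd_lambda(episodes, phi, gamma, lam, n_features):
    """LSTD(lambda) evaluation of a fixed policy with linear features.

    episodes : list of trajectories [(s, r, s_next, done), ...]
    phi      : feature map, state -> (n_features,) vector
    Returns the weight vector w with V(s) ~= w . phi(s).
    """
    A = np.zeros((n_features, n_features))
    b = np.zeros(n_features)
    for traj in episodes:
        z = np.zeros(n_features)           # eligibility trace
        for s, r, s_next, done in traj:
            f = phi(s)
            f_next = np.zeros(n_features) if done else phi(s_next)
            z = gamma * lam * z + f
            A += np.outer(z, f - gamma * f_next)
            b += z * r
    # small ridge term guards against a singular A in this sketch
    return np.linalg.solve(A + 1e-8 * np.eye(n_features), b)
```

Inside an LSPI-style loop, this evaluation alternates with greedy policy improvement; λ tunes the bias-variance trade-off described above.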
948

Restauration et séparation de signaux polynomiaux par morceaux. Application à la microscopie de force atomique / Restoration and separation of piecewise polynomial signals. Application to atomic force microscopy

Duan, Junbo 15 November 2010 (has links) (PDF)
This thesis belongs to the field of inverse problems in signal processing. It is devoted to the design of algorithms for the restoration and separation of sparse signals and to their application to the approximation of force curves in atomic force microscopy (AFM), where sparsity is measured by the number of discontinuity points in the signal (jumps, changes of slope, changes of curvature). On the methodological side, suboptimal algorithms are proposed for the sparse approximation problem based on the ℓ0 pseudo-norm: Single Best Replacement (SBR) is an iterative "add-remove" algorithm inspired by existing algorithms for the restoration of Bernoulli-Gaussian signals, and Continuation Single Best Replacement (CSBR) provides approximations at varying degrees of sparsity. We also propose an algorithm for separating sparse sources from mixtures with delays, based on first applying CSBR to each mixture and then matching the peaks found across the different mixtures. Atomic force microscopy is a recent technology for measuring interaction forces between nano-objects. Force curve analysis relies on piecewise parametric models. We propose an algorithm that detects the regions of interest (the pieces) where each model applies and then estimates, by least squares, the physical parameters (elasticity, adhesion force, topography, etc.) in each region. Finally, we propose another approach that models a force curve as a mixture of delayed sparse source signals; the search for the source signals in a force-volume image involves a large number of mixtures, since there are as many mixtures as pixels in the image.
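A minimal sketch of the single-replacement idea behind SBR: starting from an empty support, repeatedly toggle (insert or remove) the one dictionary column that most decreases the ℓ0-penalised least-squares cost, and stop when no single change helps. Efficient updates and the continuation over the penalty μ used by CSBR are omitted.

```python
import numpy as np

def sbr(y, H, mu, max_iter=100):
    """Single Best Replacement sketch for l0-penalised least squares:
    min_x ||y - H x||^2 + mu * ||x||_0, over supports of columns of H."""
    m, n = H.shape
    support = set()

    def cost(S):
        if not S:
            return float(y @ y)
        Hs = H[:, sorted(S)]
        x, *_ = np.linalg.lstsq(Hs, y, rcond=None)
        r = y - Hs @ x
        return float(r @ r) + mu * len(S)

    best = cost(support)
    for _ in range(max_iter):
        # try every single insertion or removal (set toggle)
        moves = [(cost(support ^ {j}), j) for j in range(n)]
        new_cost, j = min(moves)
        if new_cost >= best:               # no single change improves
            break
        support ^= {j}
        best = new_cost
    return sorted(support), best
```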
949

Multivariate non-invasive measurements of skin disorders

Nyström, Josefina January 2006 (has links)
The present thesis proposes new methods for obtaining objective and accurate diagnoses in modern healthcare. Non-invasive techniques have been used to examine or diagnose three different medical conditions, namely neuropathy among diabetics, radiotherapy-induced erythema (skin redness) among breast cancer patients, and cutaneous malignant melanoma. The techniques used were Near-InfraRed spectroscopy (NIR), Multi-Frequency Bio-Impedance Analysis of the whole body (MFBIA-body), Laser Doppler Imaging (LDI) and Digital Colour Photography (DCP).

Neuropathy in diabetics was studied in papers I and II. The first study was performed on diabetics and control subjects of both genders. A separation was seen between males and females, and the data therefore had to be divided by gender in order to obtain good models. Once this division was made, NIR spectroscopy was shown to be a viable technique for measuring neuropathy. The second study on diabetics, in which MFBIA-body was added to the analysis, was performed on males exclusively. Principal component analysis showed that healthy reference subjects tend to separate from diabetics, and that diabetics with severe neuropathy separate from persons less affected.

The preliminary study presented in paper III was performed on breast cancer patients in order to investigate whether NIR, LDI and DCP were able to detect radiotherapy-induced erythema. Its promising results motivated a new and larger study, presented in papers IV and V, intended to investigate the measurement techniques further and also to examine the effect that two different skin lotions, Essex and Aloe vera, have on the development of erythema. The Wilcoxon signed rank sum test showed that DCP and NIR could detect erythema developed during one week of radiation treatment, while LDI could detect erythema developed during two weeks of treatment. None of the techniques detected any difference between the two lotions in the development of erythema.

The use of NIR to diagnose cutaneous malignant melanoma is presented as unpublished results in this thesis. This study gave promising but inconclusive results; NIR could be of interest for the future development of instrumentation for the diagnosis of skin cancer.
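The group separation mentioned above was inspected with principal component analysis. A minimal score-projection sketch, via the SVD of the mean-centred spectra, looks as follows; plotting the first two score columns is how a diabetics-versus-controls separation would be examined.

```python
import numpy as np

def pca_scores(X, n_components=2):
    """Project mean-centred spectra (rows of X are subjects) onto
    their leading principal components and return the scores."""
    Xc = X - X.mean(axis=0)                      # centre each variable
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T              # score matrix
```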
950

3D imaging using time-correlated single photon counting

Neimert-Andersson, Thomas January 2010 (has links)
This project investigates a laser radar system based on the principles of time-correlated single photon counting: by measuring the times of flight of reflected photons it can recover range profiles and perform three-dimensional imaging of scenes. Thanks to the photon counting technique, the resolution and precision the system can achieve are very high compared to analog systems. These properties make the system interesting for many military applications; for example, it can be used to interrogate non-cooperative targets at a safe distance in order to gather intelligence. However, signal processing is needed to extract the information from the data acquired by the system, and this project focuses on the analysis of different signal processing methods.

The Wiener filter and the Richardson-Lucy algorithm are used to deconvolve the data acquired by the photon counting system. To find the positions of potential targets, different non-linear least squares approaches are tested, as well as a less conventional method called ESPRIT. The methods are evaluated on their ability to resolve two targets separated by a known distance, the accuracy with which they locate a single target, their robustness to noise, and their computational burden.

Results show that fitting a linear combination of asymmetric super-Gaussians to the data by non-linear least squares accurately resolves targets separated by 1.75 cm, the best result of all the methods tested. The accuracy in locating a single target is similar across the methods, but ESPRIT has a much faster computation time.
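Of the deconvolution methods tested, the Richardson-Lucy algorithm is straightforward to sketch for a 1-D photon-count histogram with a known instrument response. The version below is a minimal illustration, assuming non-negative data and a normalised response; boundary handling and stopping rules are simplified.

```python
import numpy as np

def richardson_lucy(d, psf, n_iter=50):
    """Richardson-Lucy deconvolution of a 1-D photon-count histogram
    `d` with known instrument response `psf` (both non-negative)."""
    psf = psf / psf.sum()                       # normalise the response
    psf_rev = psf[::-1]                         # flipped kernel (adjoint)
    u = np.full_like(d, d.mean(), dtype=float)  # flat initial estimate
    for _ in range(n_iter):
        conv = np.convolve(u, psf, mode="same")
        ratio = d / np.maximum(conv, 1e-12)     # avoid division by zero
        u *= np.convolve(ratio, psf_rev, mode="same")
    return u
```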
