371

Estudo comparativo de métodos geoestatísticos de estimativas e simulações estocásticas condicionais / Comparative study of geostatistical estimation methods and conditional stochastic simulations

Furuie, Rafael de Aguiar, 05 October 2009
Different geostatistical methods present themselves as the optimal solution in different settings, according to the characteristics of the data under analysis. Some of the most popular estimation methods include ordinary kriging and lognormal ordinary kriging, the latter involving a transformation of the data from their original space to a Gaussian distribution. However, these methods have limitations, one of the most prominent being the smoothing effect observed in the resulting estimates. Some recent algorithms have been proposed to correct this effect, and they are tested in this work for their effectiveness, as are some methods for the back-transformation of the lognormally converted values. Another approach to the problem is the group of methods known as stochastic simulation, the most popular being sequential Gaussian simulation and turning bands simulation; although these do not present the smoothing effect, they lack the local accuracy characteristic of the estimation methods. This work assesses the effectiveness of the different estimation methods (ordinary kriging and lognormal ordinary kriging, as well as their corrected estimates) and simulation methods (sequential Gaussian simulation and turning bands simulation) for different data scenarios. Twenty-seven exhaustive data sets (on a 50x50 grid) were sampled at 90 points by simple random sampling. These data sets started from a Gaussian distribution (Log1) and had their coefficients of variation increased progressively, up to a highly asymmetrical distribution (Log27). Experimental semivariograms were computed and modeled for the geostatistical estimation and simulation processes. The resulting estimates or realizations were then compared with the original exhaustive data to assess how well they reproduced the original data, both by comparing statistical parameters of the original and reconstructed data and graphically. Results showed that the method with the best correlation with the exhaustive data was lognormal ordinary kriging, better still when Yamamoto's back-transformation technique was applied, which greatly improved the results for the more asymmetrical data sets. Ordinary kriging showed severe limitations in reproducing the lower tail of the more asymmetrical data sets, with its corrected estimates performing worse there than the uncorrected ones. Both simulation methods showed a very low degree of correlation with the exhaustive data, their results becoming progressively less representative as the coefficient of variation grew, although they have the advantage of providing several scenarios for decision making.
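For context, a standard formulation of the lognormal ordinary kriging back-transform (the step that corrected techniques such as Yamamoto's aim to improve) is sketched below. This is the textbook form, not necessarily the exact variant evaluated in the thesis; the symbols mu (the Lagrange multiplier of the ordinary kriging system) and sigma^2_OK (the kriging variance in log space) follow the usual conventions.

```latex
% Krige Y = ln Z with ordinary kriging weights lambda_i:
\[
  \hat{Y}(u) = \sum_{i=1}^{n} \lambda_i \, Y(u_i),
  \qquad \sum_{i=1}^{n} \lambda_i = 1 .
\]
% Naive back-transform to the original space:
\[
  \hat{Z}(u) = \exp\!\left( \hat{Y}(u) + \tfrac{1}{2}\,\sigma^2_{OK}(u) - \mu(u) \right).
\]
% The exponentiation amplifies any bias incurred in log space, which is
% why corrected back-transformations remain an active research topic.
```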
373

Choix optimal du paramètre de lissage dans l'estimation non paramétrique de la fonction de densité pour des processus stationnaires à temps continu / Optimal choice of smoothing parameter in non parametric density estimation for continuous time stationary processes

El Heda, Khadijetou, 25 October 2018
This thesis focuses on the choice of the smoothing parameter in the nonparametric estimation of the density function for stationary ergodic continuous-time processes. The accuracy of the estimation depends greatly on the choice of this parameter. The main goal of this work is to build an automatic bandwidth selection procedure and to establish its asymptotic properties under a dependency framework general enough to be easily used in practice. The contribution consists of three parts. The first part reviews the literature on the subject, sets out the state of the art, and situates our contribution within it. In the second part, we construct an automatic method for selecting the smoothing parameter when the density is estimated by the kernel method; this choice, derived from the cross-validation method, is asymptotically optimal. In the third part, we establish asymptotic properties of the cross-validation bandwidth, given by almost-sure convergence results.
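As a concrete illustration of bandwidth selection by cross-validation, here is a minimal sketch of the classical least-squares cross-validation criterion for a Gaussian kernel. Note this is the i.i.d. textbook form, whereas the thesis treats the more general continuous-time ergodic setting; the function names, grid, and simulated data are illustrative assumptions.

```python
import numpy as np

def lscv_score(h, x):
    """Least-squares cross-validation score for a Gaussian-kernel
    density estimate with bandwidth h (classical i.i.d. form)."""
    n = len(x)
    d = (x[:, None] - x[None, :]) / h  # pairwise scaled differences
    # Integral term: the convolution of two Gaussian kernels is N(0, 2),
    # so the integral of f_hat^2 has a closed form.
    phi2 = np.exp(-d**2 / 4) / np.sqrt(4 * np.pi)
    int_term = phi2.sum() / (n**2 * h)
    # Leave-one-out term: drop the diagonal (self) contributions.
    phi = np.exp(-d**2 / 2) / np.sqrt(2 * np.pi)
    loo = (phi.sum() - n * phi[0, 0]) / (n * (n - 1) * h)
    return int_term - 2 * loo

rng = np.random.default_rng(0)
x = rng.normal(size=200)
grid = np.linspace(0.05, 1.0, 60)
h_opt = grid[np.argmin([lscv_score(h, x) for h in grid])]
print(f"LSCV-selected bandwidth: {h_opt:.3f}")
```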
374

Modelos lineares parciais aditivos generalizados com suavização por meio de P-splines / Generalized additive partial linear models with P-splines smoothing

Holanda, Amanda Amorim, 03 May 2018
In this work we present generalized partial linear models, with one continuous explanatory variable treated nonparametrically, and generalized additive partial linear models, with at least two continuous explanatory variables treated that way. P-splines are used to describe the relationship between the response and the continuous explanatory variables. The penalized likelihood functions, penalized score functions, and penalized Fisher information matrices are then derived, and the penalized maximum likelihood estimates are obtained by combining the backfitting (Gauss-Seidel) algorithm with the iterative Fisher scoring method for the two types of model. We also present procedures for estimating the smoothing parameter and the effective degrees of freedom. Finally, for the purpose of illustration, the proposed models are fitted to real data sets.
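The penalized-likelihood machinery the abstract refers to can be summarized in the usual P-spline form. The sketch below is the standard Eilers-Marx setup, not necessarily the thesis's exact notation; B-spline coefficients beta and the d-th order difference matrix D_d are the conventional symbols.

```latex
% Penalized log-likelihood with a d-th order difference penalty:
\[
  \ell_p(\beta) \;=\; \ell(\beta) \;-\; \tfrac{\lambda}{2}\,
  \beta^{\top} D_d^{\top} D_d \,\beta ,
\]
% so the penalized score and penalized Fisher information become
\[
  U_p(\beta) = U(\beta) - \lambda\, D_d^{\top} D_d \,\beta ,
  \qquad
  I_p(\beta) = I(\beta) + \lambda\, D_d^{\top} D_d .
\]
% Fisher scoring then iterates with I_p and U_p in place of I and U.
```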
375

Étude des propriétés statistiques d'une tache focale laser lissée et de leur influence sur la rétrodiffusion brillouin stimulée / Studies of the statistical properties of a smoothed laser focal spot and their influence on stimulated Brillouin backscattering

Duluc, Maxime, 15 July 2019
In the context of inertial confinement fusion (ICF), optical smoothing is a technique used to obtain the most homogeneous laser irradiation possible by modifying the temporal and spatial coherence properties of the laser beams. Optical smoothing is a necessity on high-power lasers such as the Laser Mégajoule (LMJ) to limit the development of parametric instabilities arising from laser-plasma interaction, among them stimulated Brillouin backscattering (SBS). These instabilities lead to target irradiation defects and can also be a source of damage in the optical lines. However, smoothing techniques can cause other problems in the laser chain, such as the conversion of phase modulation to amplitude modulation (FM-to-AM), which is harmful to the proper conduct of experiments and can also damage the laser optics. A compromise around optical smoothing is therefore necessary, but it is hard to revisit because the gains and losses are difficult to quantify: the laser specialist always wants less smoothing and the experimentalist always more, and neither can bring enough quantitative evidence to tip the balance. This thesis lays the first groundwork for reaching this compromise for the LMJ through theoretical and numerical studies.
We carefully compare longitudinal (LSSD) and transverse (TSSD) smoothing by spectral dispersion in an ideal smoothing configuration for each case. Using 3D codes, we simulated SBS in a gold plasma, typical of ICF experiments and favourable to the development of SBS. We show that, contrary to popular belief, the temporal evolution of SBS differs between the two smoothing schemes. First, the asymptotic saturation levels are not quite the same; with a simple ray description and the calculation of the SBS gain for each ray, we were able to explain this difference. The dynamics of SBS also differ somewhat: we have shown that they are determined by the temporal evolution of the hot-spot properties, in particular the effective interaction length between the Brillouin backscattered light and the hot spots, which depends on both the longitudinal velocity and the length of the hot spots. Indeed, synchronizing the effective interaction lengths of the two smoothing schemes also synchronizes the growth of the backscatter curves before saturation. We also show that it is possible to change the smoothing parameters of the LMJ, illustrating a new way to reduce the FM-to-AM conversion inevitably present in high-power lasers: by splitting the total spectrum usually carried by a quadruplet (a group of four beams) into two smaller identical spectra on the left and right beams, the FM-to-AM conversion is reduced considerably, from 30% to 5%, while maintaining the smoothing performance for SBS. We have also shown that the resulting coherence time of the laser has no effect on the maximum level of SBS reached. The impact of these changes on other instabilities, such as stimulated Raman scattering or crossed-beam energy transfer, will still need to be investigated.
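For background, smoothing by spectral dispersion starts from a sinusoidal phase modulation of the beam; the standard relations below (general optics, not specific to this thesis) show where the bandwidth, and hence the FM-to-AM coupling, comes from. The modulation depth beta and modulator frequency omega_m are the conventional symbols.

```latex
% Sinusoidal phase modulation of the field:
\[
  E(t) = E_0 \exp\!\big[ i\big( \omega_0 t + \beta \sin(\omega_m t) \big) \big]
       = E_0 \sum_{k=-\infty}^{\infty} J_k(\beta)\, e^{\, i(\omega_0 + k\,\omega_m)t}
\]
% (Jacobi-Anger expansion), i.e. a comb of sidebands of total width
\[
  \Delta\omega \approx 2\,\beta\,\omega_m .
\]
% Any spectrally selective element in the chain distorts this comb and
% converts the pure FM into amplitude modulation (FM-to-AM).
```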
376

  • 可加性模型保險之應用:壽險保費收入與總體經濟指標美、日、中、英、德之模型比較 / An Application of Additive Models in Insurance: Comparing Life Insurance Models of Premium Income and Macroeconomic Indicators for the United States, Japan, Taiwan, England, and Germany

許光宏 (Ellit G. Sheu), Unknown Date
Linear models are prized for easy computation and convenient interpretation, but they require many strict assumptions, and post-hoc model checking takes considerable effort. An additive model, by contrast, only requires that the component functions be specified and that the backfitting algorithm converge. Besides retaining the additivity and interpretability of the linear model, the additive model also improves estimation accuracy. Across the insurance markets of the five countries studied (the United States, Japan, Taiwan, England, and Germany), the coefficient of determination improves markedly (from 0.85 to 0.9957). Based on our empirical results, the additive model: (1) raises the level of statistical application, greatly improving the explanatory power of the model variables and sharply reducing the in-model MSE (mean square error; see Tables 5-1 through 5-6); (2) retains the convenient interpretability of the linear model; (3) improves estimation: comparing actual and estimated premium income for 1991 under the two models (see Tables 5-3, 5-6, 5-9, 5-12, and 5-15), the ratio of the linear model's error rate to the additive model's is 2 for the United States, 12 for Japan, 4.55 for Taiwan, 2.95 for England, and 2.95 for Germany; and (4) presents the fitted functions graphically, making them easy to read. The premium income estimation model built with the additive model...
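Since the abstract hinges on the backfitting algorithm, here is a minimal sketch of the idea. The smoother, data, and function names are illustrative assumptions; a real application would plug in spline or kernel smoothers and the premium/macroeconomic series the thesis uses.

```python
import numpy as np

def backfit(X, y, smoother, n_iter=20):
    """Minimal backfitting (Gauss-Seidel) loop for an additive model
    y ~ alpha + sum_j f_j(X[:, j])  -- an illustrative sketch."""
    n, p = X.shape
    alpha = y.mean()
    f = np.zeros((n, p))
    for _ in range(n_iter):
        for j in range(p):
            # Partial residuals: remove every component except f_j.
            r = y - alpha - f.sum(axis=1) + f[:, j]
            f[:, j] = smoother(X[:, j], r)
            f[:, j] -= f[:, j].mean()  # center for identifiability
    return alpha, f

def moving_average_smoother(x, r, k=15):
    """Crude nearest-neighbour mean smoother, standing in for a
    spline or kernel smoother."""
    order = np.argsort(x)
    rs = r[order]
    smoothed = np.empty_like(r)
    for i in range(len(x)):
        lo, hi = max(0, i - k), min(len(x), i + k + 1)
        smoothed[order[i]] = rs[lo:hi].mean()
    return smoothed

rng = np.random.default_rng(1)
X = rng.uniform(-2, 2, size=(300, 2))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2 + rng.normal(0, 0.1, 300)
alpha, f = backfit(X, y, moving_average_smoother)
```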
377

修勻與小區域人口之研究 / A Study of smoothing methods for small area population

金碩 (Jin, Shuoh), Unknown Date
Population size plays a very important role in statistical estimation, and it is difficult to derive reliable estimates for small areas; the task is even harder when the geographic and social attributes within the small areas vary widely. Although population aging and longevity risk are worldwide phenomena, the problem differs across countries. The aim of this study is to explore population projection and mortality models for small areas, taking each small area's distinguishing characteristics into account. The difficulties of small area population projection can be attributed to four factors: data quality, population size, number of base years, and projection horizon. Data quality is beyond the scope of this study; the focus is on the other three issues. Smoothing methods and coherent models are applied to improve the stability and accuracy of small area estimation. In this study, the block bootstrap and smoothing methods are combined to project the populations of small areas in Taiwan, and the Lee-Carter and age-period-cohort models are extended with smoothing and coherent methods. We found that smoothing methods generally reduce the fluctuation of estimation and projection, and the improvement is especially noticeable for areas with smaller populations. To obtain a reliable population projection for small areas, we suggest using at least fifteen years of historical data and a projection horizon of no more than twenty years. For developing small area mortality models, we found that smoothing methods achieve effects similar to those of more complicated approaches, such as coherent models.
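Two of the ingredients named in the abstract have compact standard forms, sketched here in their textbook versions (the symbols are the conventional ones, not necessarily the thesis's notation):

```latex
% Lee-Carter model for the central death rate at age x in year t:
\[
  \ln m_{x,t} = a_x + b_x\, k_t + \varepsilon_{x,t},
\]
% and the standardized mortality ratio used to borrow strength from a
% large reference area when modeling a small one:
\[
  \mathrm{SMR} = \frac{\sum_x d_x}{\sum_x n_x \, m_x^{\mathrm{ref}}},
\]
% where d_x and n_x are the small area's deaths and exposures and
% m_x^{ref} the reference (large-area) rates.
```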
378

Contours actifs paramétriques pour la segmentation d'images et vidéos / Parametric active contours for image and video segmentation

Precioso, Frédéric, 24 September 2004
This thesis falls within the framework of active contour models: dynamic methods applied to image segmentation, for both still images and video. The image is represented by region and/or contour descriptors, and segmentation is treated as the minimization of a functional. The minimum is sought through the propagation of a so-called region-based active contour. The effectiveness of these methods lies above all in their robustness and speed. The objective of this thesis is threefold: (i) the development of a parametric curve representation satisfying certain regularity constraints, (ii) the conditions required for a stable evolution of these curves, and (iii) the reduction of computational cost, to propose a method suited to applications requiring real-time response. We focus mainly on rigidity constraints, which allow greater robustness to noise. Regarding the evolution of active contours, we study the problems of applying the propagation force, handling topology changes, and the convergence conditions. We chose cubic spline curves: this family offers attractive regularity properties, allows exact computation of the differential quantities appearing in the functional, and considerably reduces the volume of data to be processed. Furthermore, we extended the classical interpolating spline model to an approximating spline model, known as smoothing splines, which balances the regularity constraint against the interpolation error at the contour's sample points. This flexibility makes it possible to favor either accuracy or robustness. The implementation of these spline models has proven its effectiveness in various segmentation applications.
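The accuracy-versus-robustness trade-off described at the end of the abstract is the classical smoothing-spline one; the standard objective is sketched below (textbook form, not the thesis's exact functional):

```latex
% Approximating (smoothing) splines balance data fit against regularity:
\[
  \min_{f} \; \sum_{i=1}^{n} \big( y_i - f(x_i) \big)^2
  \;+\; \lambda \int \big( f''(t) \big)^2 \, dt ,
\]
% lambda -> 0 recovers the interpolating spline (maximal accuracy),
% while large lambda enforces rigidity (robustness to noise).
```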
379

Fenchel duality-based algorithms for convex optimization problems with applications in machine learning and image restoration

Heinrich, André, 27 March 2013
The main contribution of this thesis is the concept of Fenchel duality, with a focus on its application to machine learning problems and image restoration tasks. We formulate a general optimization problem for modeling support vector machine tasks, assign a Fenchel dual problem to it, and prove weak and strong duality statements as well as necessary and sufficient optimality conditions for that primal-dual pair. In addition, several special instances of the general optimization problem are derived for different choices of loss functions, for both the regression and the classification task. The convenience of these approaches is demonstrated by numerically solving several problems. We then formulate a general nonsmooth optimization problem and assign a Fenchel dual problem to it. It is shown that the optimal objective values of the primal and the dual problem coincide and that the primal problem has an optimal solution under certain assumptions. The dual problem turns out to be nonsmooth in general, so a regularization is performed twice to obtain an approximate dual problem that can be solved efficiently by a fast gradient algorithm. We show how an approximate optimal and feasible primal solution can be constructed from sequences of proximal points closely related to the dual iterates, and that this solution converges to the optimal primal solution for arbitrarily small accuracy. Finally, the support vector regression task arises as a particular case of the general optimization problem, and the theory is specialized to it. We calculate several proximal points occurring when different loss functions are used, as well as for some regularization problems applied in image restoration tasks. Numerical experiments illustrate the applicability of our approach to these types of problems.
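The primal-dual pairing the abstract builds on is the standard Fenchel-Rockafellar scheme, sketched here generically; the thesis works with specific choices of f, g, and A for the SVM and image-restoration problems.

```latex
% Primal problem with proper convex f, g and a linear operator A:
\[
  (P)\quad \inf_{x} \; f(x) + g(Ax),
\]
% Fenchel dual, with f^*, g^* the convex conjugates and A^* the adjoint:
\[
  (D)\quad \sup_{y} \; -f^{*}(A^{*}y) - g^{*}(-y).
\]
% Weak duality v(P) >= v(D) always holds; strong duality and the
% optimality conditions require a regularity (constraint qualification).
```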
380

Erhöhung der Qualität und Verfügbarkeit von satellitengestützter Referenzsensorik durch Smoothing im Postprocessing / Improving the quality and availability of satellite-based reference sensing through smoothing in postprocessing

Bauer, Stefan, 02 February 2013
This work investigates postprocessing methods for increasing the accuracy and availability of satellite-based positioning methods that operate without inertial sensors. The goal is to produce, even under difficult reception conditions such as those found in urban areas, a trajectory whose accuracy qualifies it as a reference for other methods. Two approaches are pursued: the use of IGS data, and smoothing that incorporates sensors from the vehicle odometry. It is shown that using IGS data reduces the error by 50% to 70%. Furthermore, the smoothing methods demonstrated that they can consistently achieve decimeter-level accuracy even under poor reception conditions.
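The abstract does not name the smoother; the canonical fixed-interval choice for fusing GNSS fixes with vehicle odometry in postprocessing is a Rauch-Tung-Striebel (RTS) backward pass over a Kalman forward pass, sketched below as an assumption about the family of methods meant rather than the thesis's specific algorithm.

```latex
% Forward (Kalman) pass yields filtered x_{k|k}, P_{k|k} and
% predictions x_{k+1|k}, P_{k+1|k}. The RTS backward pass is:
\[
  C_k = P_{k|k}\, F_k^{\top} P_{k+1|k}^{-1},
\]
\[
  \hat{x}_{k|N} = \hat{x}_{k|k} + C_k \big( \hat{x}_{k+1|N} - \hat{x}_{k+1|k} \big),
  \qquad
  P_{k|N} = P_{k|k} + C_k \big( P_{k+1|N} - P_{k+1|k} \big) C_k^{\top}.
\]
% Running backward over the whole recording is what makes this a
% postprocessing technique: every epoch benefits from future data.
```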
