201 |
金融整合後壽險公司組織轉型之研究 / A Study of the Organizational Transformation of Life Insurance Companies after Financial Integration
林奕明 Unknown Date (has links)
In recent years, under the trend of financial integration, domestic and foreign financial institutions have merged into large financial holding groups, substantially reshaping the structure of the financial industry, and new financial products and markets have developed within the integrated framework. Because financial holding companies combine banks, securities firms, and insurers, their subsidiaries can engage in cross selling, diversifying the products offered to customers, including insurance, stocks, credit cards, funds, and bonds. Through cross selling of these diversified products and services, financial holding companies can also expand market share and reduce operating costs, thereby strengthening overall profitability.
This study takes the transformation of the life insurance subsidiaries of Cathay Financial Holdings and Fubon Financial Holdings as case studies, analyzing the reasons for organizational transformation, the forms of organizational adjustment, and the results achieved after transformation. Using roughly ten years of statistical data, the study examines industry changes and observes the case companies' market share and business performance. The findings are: (1) life insurance companies have transformed into financial holding companies offering diversified services; (2) life insurance sales agents have become full-service financial advisers; (3) the financial industry has integrated, diversifying marketing channels; (4) technological and industrial change has created new channels and new markets; (5) corporate growth has moved toward diversification while maintaining core competencies.
|
202 |
Medical Image Processing on the GPU : Past, Present and Future
Eklund, Anders, Dufort, Paul, Forsberg, Daniel, LaConte, Stephen January 2013 (has links)
Graphics processing units (GPUs) are used today in a wide range of applications, mainly because they can dramatically accelerate parallel computing, are affordable and energy efficient. In the field of medical imaging, GPUs are in some cases crucial for enabling practical use of computationally demanding algorithms. This review presents the past and present work on GPU accelerated medical image processing, and is meant to serve as an overview and introduction to existing GPU implementations. The review covers GPU acceleration of basic image processing operations (filtering, interpolation, histogram estimation and distance transforms), the most commonly used algorithms in medical imaging (image registration, image segmentation and image denoising) and algorithms that are specific to individual modalities (CT, PET, SPECT, MRI, fMRI, DTI, ultrasound, optical imaging and microscopy). The review ends by highlighting some future possibilities and challenges.
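As an illustrative aside (not code from this review), the sketch below shows the kind of basic operation the review covers: a separable 2-D Gaussian smoothing filter applied to an image slice on the GPU. The use of PyTorch, the kernel radius, and the random stand-in image are assumptions chosen for brevity.

```python
# Hedged sketch: GPU-accelerated 2-D Gaussian filtering with PyTorch.
# The image, kernel radius and sigma are illustrative choices, not values from the review.
import torch
import torch.nn.functional as F

def gaussian_kernel1d(sigma: float, radius: int) -> torch.Tensor:
    x = torch.arange(-radius, radius + 1, dtype=torch.float32)
    k = torch.exp(-0.5 * (x / sigma) ** 2)
    return k / k.sum()

def gaussian_filter_gpu(image: torch.Tensor, sigma: float = 2.0) -> torch.Tensor:
    """Separable Gaussian smoothing of a 2-D tensor, run on the GPU if available."""
    device = "cuda" if torch.cuda.is_available() else "cpu"
    radius = int(3 * sigma)
    k = gaussian_kernel1d(sigma, radius).to(device)
    img = image.to(device)[None, None]                              # shape (1, 1, H, W)
    img = F.conv2d(img, k.view(1, 1, 1, -1), padding=(0, radius))   # horizontal pass
    img = F.conv2d(img, k.view(1, 1, -1, 1), padding=(radius, 0))   # vertical pass
    return img[0, 0].cpu()

if __name__ == "__main__":
    slice_ = torch.rand(256, 256)                # stand-in for a CT/MRI slice
    smoothed = gaussian_filter_gpu(slice_, sigma=2.0)
    print(smoothed.shape)
```

The same separable-filter idea carries over to hand-written CUDA kernels or other GPU frameworks; PyTorch is used here only because it hides the device management.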
|
203 |
Segmentação da estrutura cerebral hipocampo por meio de nuvem de similaridade / Automatic hippocampus segmentation through similarity cloud
Fredy Edgar Carranza Athó 03 August 2011 (has links)
O hipocampo é uma estrutura cerebral que possui importância primordial para o sistema de memória humana. Alterações nos seus tecidos levam a doenças neurodegenerativas, tais como: epilepsia, esclerose múltipla e demência, entre outras. Para medir a atrofia do hipocampo é necessário isolá-lo do restante do cérebro. A separação do hipocampo das demais partes do cérebro ajuda aos especialistas na análise e o entendimento da redução de seu volume e detecção de qualquer anomalia presente. A extração do hipocampo é principalmente realizada de modo manual, a qual é demorada, pois depende da interação do usuário. A segmentação automática do hipocampo é investigada como uma alternativa para contornar tais limitações. Esta dissertação de mestrado apresenta um novo método de segmentação automático, denominado Modelo de Nuvem de Similaridade (Similarity Cloud Model - SimCM). O processo de segmentação é dividido em duas etapas principais: i) localização por similaridade e ii) ajuste de nuvem. A primeira operação utiliza a nuvem para localizar a posição mais provável do hipocampo no volume destino. A segunda etapa utiliza a nuvem para corrigir o delineamento final baseada em um novo método de cálculo de readequação dos pesos das arestas. Nosso método foi testado em um conjunto de 235 MRI combinando imagens de controle e de pacientes com epilepsia. Os resultados alcançados indicam um rendimento superior tanto em efetividade (qualidade da segmentação) e eficiência (tempo de processamento), comparado com modelos baseados em grafos e com modelos Bayesianos. Como trabalho futuro, pretendemos utilizar seleção de características para melhorar a construção da nuvem e o delineamento dos tecidos / The hippocampus is a brain structure that plays a central role in the human memory system. Changes in its tissues lead to neurodegenerative diseases such as epilepsy, multiple sclerosis, and dementia, among others. To measure hippocampus atrophy, it is crucial to isolate it from the rest of the brain. Separating the hippocampus from the brain helps physicians analyze and understand its volume reduction and detect any abnormality. The extraction of the hippocampus is dominated by manual segmentation, which is time consuming mainly because it depends on user interaction. Automatic segmentation of the hippocampus has therefore been investigated as an alternative to overcome these limitations. This master's dissertation presents a new automatic segmentation method, called the Similarity Cloud Model (SimCM), based on hippocampus feature extraction. The segmentation process consists of two main operations: i) localization by similarity, and ii) cloud adjustment. The first operation uses the cloud to localize the most probable position of the hippocampus in a target volume. The second uses the cloud to correct the final labeling, based on a new method for arc-weight re-adjustment. Our method has been tested on a dataset of 235 MRIs combining healthy subjects and epileptic patients. Results indicate superior performance, in terms of effectiveness (segmentation quality) and efficiency (processing time), in comparison with similar graph-based and Bayesian-based models. As future work, we intend to use feature selection to improve cloud construction and tissue delineation.
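The abstract does not spell out how the similarity cloud is built or how the arc weights are re-adjusted, so the sketch below only illustrates the general flavor of the first step, "localization by similarity", as plain 3-D template matching by cross-correlation of intensity-standardized volumes. The template, the volume, and the correlation criterion are hypothetical stand-ins, not the SimCM algorithm itself.

```python
# Hedged sketch of "localization by similarity" only, rendered as generic 3-D
# template matching via cross-correlation of standardized volumes; SimCM's
# actual cloud model and arc-weight re-adjustment are not described in the
# abstract and are not shown here.
import numpy as np
from scipy.signal import fftconvolve

def localize_by_similarity(volume: np.ndarray, template: np.ndarray) -> tuple:
    """Return the voxel offset where the template best matches the volume."""
    v = (volume - volume.mean()) / (volume.std() + 1e-8)
    t = (template - template.mean()) / (template.std() + 1e-8)
    # Convolving with the flipped template gives the cross-correlation map.
    score = fftconvolve(v, t[::-1, ::-1, ::-1], mode="valid")
    return np.unravel_index(np.argmax(score), score.shape)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    vol = rng.standard_normal((64, 64, 64))       # stand-in for an MRI volume
    tpl = vol[20:36, 25:41, 30:46].copy()         # stand-in hippocampus template
    print(localize_by_similarity(vol, tpl))       # expected near (20, 25, 30)
```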
|
204 |
Uma contribuição à análise espectral de sinais estacionários e não estacionários / A contribution to the spectral analysis of stationary and non-stationary signals
Menezes, Alam Silva 01 September 2014 (has links)
A presente tese propõe soluções ao problema da explicitação do conteúdo espectral de processos estacionários e não estacionários, com aplicações na estimação de frequência, estimação da densidade espectral de potência e no monitoramento do espectro. A técnica de estimação de frequência proposta nesta tese, baseada na warped discrete Fourier transform, apresenta, de acordo com as simulações computacionais, o melhor desempenho frente às demais técnicas comparadas, atingindo o Cramer-Rao bound para uma ampla faixa de relação sinal ruído. Em relação à estimação da densidade espectral de potência, a Hartley Multitaper method, proposta nesta tese, apresenta desempenho similar à multitaper method, em termos da variância de estimação e da polarização do espectro, mas com simplificação de implementação. Uma técnica para monitoramento do espectro para sistemas power line communication é proposta, levando em consideração o conceito de quanta e a diversidade observada quando os sinais são aquisitados a partir da rede de energia elétrica e do ar. Baseando-se em sinais sintéticos, gerados em computador, assim como dados de medição do espectro, obtidos utilizando uma antena e o cabo de energia elétrica como elementos sensores, verifica-se que o desempenho da técnica proposta supera a monitoração padrão, sobretudo quando a diversidade gerada pelo cabo e pela antena sobre o sinal monitorado é explorada na detecção. / This dissertation proposes solutions for the spectral analysis of stationary and non-stationary processes, with applications to frequency estimation, power spectral density estimation and spectrum monitoring. The proposed frequency estimation technique, based on the warped discrete Fourier transform, outperforms the other techniques assessed in computer simulations, achieving the Cramer-Rao bound over a wide range of signal-to-noise ratios. Regarding power spectral density estimation, the proposed Hartley multitaper method shows performance similar to the standard multitaper method, in terms of estimation variance and bias, while simplifying the implementation. A spectrum monitoring technique for power line communication systems is also introduced, built on the concept of quanta and on the diversity observed when signals are acquired from the electric power grid and from the air. Evaluated on synthetic, computer-generated signals as well as on spectrum measurement data obtained using an antenna and the power cable as sensing elements, the proposed technique outperforms standard monitoring, especially when the diversity induced by the cable and the antenna on the monitored signal is exploited in detection.
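As a point of reference for the abstract above (not the author's own code), the sketch below implements the classical Thomson multitaper PSD estimate with DPSS tapers; the thesis's Hartley multitaper method would replace the Fourier periodograms with Hartley-transform-based ones, which is not shown. The time-bandwidth product NW, the number of tapers K, and the test signal are illustrative assumptions.

```python
# Hedged sketch: a standard Thomson multitaper PSD estimate with DPSS tapers.
# Only the classical Fourier-based baseline is shown, not the Hartley variant.
import numpy as np
from scipy.signal.windows import dpss

def multitaper_psd(x, fs=1.0, NW=4, K=7):
    """Average the periodograms of K DPSS-tapered copies of x."""
    N = len(x)
    tapers = dpss(N, NW, Kmax=K)                       # shape (K, N), unit-energy tapers
    spectra = np.abs(np.fft.rfft(tapers * x, axis=1)) ** 2
    psd = spectra.mean(axis=0) / fs
    freqs = np.fft.rfftfreq(N, d=1.0 / fs)
    return freqs, psd

if __name__ == "__main__":
    fs = 1000.0
    t = np.arange(0, 1, 1 / fs)
    x = np.sin(2 * np.pi * 60 * t) + 0.5 * np.random.randn(t.size)
    f, p = multitaper_psd(x, fs=fs)
    print(f[np.argmax(p)])                             # peak near 60 Hz
```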
|
205 |
Modélisation stochastique pour l’analyse d’images texturées : approches Bayésiennes pour la caractérisation dans le domaine des transformées / Stochastic modeling for textured image analysis: Bayesian approaches for characterization in the transform domain
Lasmar, Nour-Eddine 07 December 2012 (has links)
Le travail présenté dans cette thèse s’inscrit dans le cadre de la modélisation d’images texturées à l’aide des représentations multi-échelles et multi-orientations. Partant des résultats d’études en neurosciences assimilant le mécanisme de la perception humaine à un schéma sélectif spatio-fréquentiel, nous proposons de caractériser les images texturées par des modèles probabilistes associés aux coefficients des sous-bandes. Nos contributions dans ce contexte concernent dans un premier temps la proposition de différents modèles probabilistes permettant de prendre en compte le caractère leptokurtique ainsi que l’éventuelle asymétrie des distributions marginales associées à un contenu texturé. Premièrement, afin de modéliser analytiquement les statistiques marginales des sous-bandes, nous introduisons le modèle Gaussien généralisé asymétrique. Deuxièmement, nous proposons deux familles de modèles multivariés afin de prendre en compte les dépendances entre coefficients des sous-bandes. La première famille regroupe les processus à invariance sphérique pour laquelle nous montrons qu’il est pertinent d’associer une distribution caractéristique de type Weibull. Concernant la seconde famille, il s’agit des lois multivariées à copules. Après détermination de la copule caractérisant la structure de la dépendance adaptée à la texture, nous proposons une extension multivariée de la distribution Gaussienne généralisée asymétrique à l’aide de la copule Gaussienne. L’ensemble des modèles proposés est comparé quantitativement en termes de qualité d’ajustement à l’aide de tests statistiques d’adéquation dans un cadre univarié et multivarié. Enfin, une dernière partie de notre étude concerne la validation expérimentale des performances de nos modèles à travers une application de recherche d’images par le contenu textural. Pour ce faire, nous dérivons des expressions analytiques de métriques probabilistes mesurant la similarité entre les modèles introduits, ce qui constitue selon nous une troisième contribution de ce travail. Finalement, une étude comparative est menée visant à confronter les modèles probabilistes proposés à ceux de l’état de l’art. / In this thesis we study the statistical modeling of textured images using multi-scale and multi-orientation representations. Building on results from neuroscience that liken the human perception mechanism to a selective spatial-frequency scheme, we propose to characterize textures by probabilistic models of subband coefficients. Our first contribution is a set of probabilistic models accounting for the leptokurtic nature and the possible asymmetry of the marginal distributions associated with textured content. First, to model the marginal statistics of subbands analytically, we introduce the asymmetric generalized Gaussian model. Second, we propose two families of multivariate models to capture the dependencies between subband coefficients. The first family comprises spherically invariant processes, which we characterize using a Weibull distribution. The second is that of copula-based multivariate models. After determining the copula that captures the dependence structure of the texture, we propose a multivariate extension of the asymmetric generalized Gaussian distribution using a Gaussian copula. All proposed models are compared quantitatively in terms of goodness of fit, using univariate and multivariate statistical tests. Finally, the last part of our study is the experimental validation of the proposed models through texture-based image retrieval. To this end, we derive closed-form metrics measuring the similarity between the introduced probabilistic models, which we regard as the third contribution of this work. A comparative study against state-of-the-art probabilistic models concludes the work.
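To make the subband-modeling idea concrete (this is an editorial sketch, not the author's code), the snippet below decomposes a random stand-in texture with a 2-D DWT and fits a symmetric generalized Gaussian to one detail subband using scipy.stats.gennorm. The asymmetric generalized Gaussian and the copula-based multivariate models proposed in the thesis have no off-the-shelf SciPy equivalent and are not reproduced here.

```python
# Hedged sketch: fit a (symmetric) generalized Gaussian to the coefficients of
# one wavelet subband. The texture patch and wavelet are illustrative choices.
import numpy as np
import pywt
from scipy.stats import gennorm

rng = np.random.default_rng(0)
texture = rng.standard_normal((128, 128))        # stand-in for a texture patch

# One level of a 2-D DWT; keep the horizontal-detail subband.
_, (cH, cV, cD) = pywt.dwt2(texture, "db2")

# Fit with the location fixed at zero, as subband coefficients are zero-mean.
beta, loc, scale = gennorm.fit(cH.ravel(), floc=0)
print(f"shape (beta) = {beta:.2f}, scale (alpha) = {scale:.2f}")
```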
|
206 |
Etude de la bornitude des transformées de Riesz sur Lp via le Laplacien de Hodge-de Rham / Boundedness of the Riesz transforms on Lp via the Hodge-de Rham Laplacian
Magniez, Jocelyn 06 November 2015 (has links)
Cette thèse comporte deux sujets d’étude mêlés. Le premier concerne l’étude de la bornitude sur Lp de la transformée de Riesz d∆^{-1/2}, où ∆ désigne l’opérateur de Laplace-Beltrami (positif). Le second traite de la régularité de Sobolev W1,p de la solution de l’équation de la chaleur non perturbée. Nous établissons également quelques résultats concernant les transformées de Riesz d’opérateurs de Schrödinger avec un potentiel comportant éventuellement une partie négative. Dans le cadre de ces travaux, nous nous plaçons sur une variété riemannienne (M, g) complète et non compacte. Nous supposons que M satisfait la propriété de doublement de volume (de constante de doublement égale à D) ainsi qu’une estimation gaussienne supérieure pour son noyau de la chaleur (celui associé à l’opérateur ∆). Nous travaillons avec le laplacien de Hodge-de Rham, agissant sur les 1-formes différentielles de M. En s’appuyant sur la formule de Bochner, liant ce laplacien à la courbure de Ricci de M, nous l’assimilons à un opérateur de Schrödinger à valeurs vectorielles. C’est un argument de dualité, basé sur une formule de commutation algébrique, qui lie l’étude du laplacien de Hodge-de Rham à celle de l’opérateur de Laplace-Beltrami ∆. [...] / This thesis has two main, intertwined parts. The first deals with the boundedness on Lp of the Riesz transform d∆^{-1/2}, where ∆ denotes the nonnegative Laplace-Beltrami operator. The second deals with the Sobolev regularity W1,p of the solution of the heat equation. We also establish some results on the Riesz transforms of Schrödinger operators with a potential possibly having a negative part. In this work, we consider a complete non-compact Riemannian manifold (M, g). We assume that M satisfies the volume doubling property (with doubling constant equal to D) as well as a Gaussian upper estimate for the heat kernel associated with the operator ∆. We work with the Hodge-de Rham Laplacian, acting on 1-differential forms of M. Through the Bochner formula, which links the Hodge-de Rham Laplacian to the Ricci curvature of M, we view it as a vector-valued Schrödinger operator. A duality argument, based on an algebraic commutation formula, then links the study of the Hodge-de Rham Laplacian to that of the Laplace-Beltrami operator ∆. [...]
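For readers unfamiliar with the notation, the identities the abstract alludes to can be written as follows; these are the standard statements, and the signs and normalizations may differ from those used in the thesis.

```latex
% Standard statements of the identities alluded to above; conventions are the
% usual ones and may not match the thesis exactly.
\[
  \vec{\Delta} \;=\; \nabla^{*}\nabla \;+\; \mathrm{Ric}
  \qquad \text{(Bochner--Weitzenb\"ock formula on 1-forms)}
\]
\[
  d\,\Delta \;=\; \vec{\Delta}\, d
  \quad\Longrightarrow\quad
  d\,e^{-t\Delta} \;=\; e^{-t\vec{\Delta}}\, d ,
\]
so the boundedness of the Riesz transform $d\Delta^{-1/2}$ on $L^{p}(M)$ reduces to
estimates on the semigroup $e^{-t\vec{\Delta}}$ acting on 1-forms, where $\Delta$ is the
nonnegative Laplace--Beltrami operator and $\vec{\Delta}$ the Hodge--de Rham Laplacian.
```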
|
207 |
Contribution à l'analyse et à la recherche d'information en texte intégral : application de la transformée en ondelettes pour la recherche et l'analyse de textes / Contribution to full-text analysis and information retrieval: application of wavelet transforms to text retrieval and analysis
Smail, Nabila 27 January 2009 (has links)
L’objet des systèmes de recherche d’informations est de faciliter l’accès à un ensemble de documents, afin de permettre à l’utilisateur de retrouver ceux qui sont pertinents, c'est-à-dire ceux dont le contenu correspond le mieux à son besoin en information. La qualité des résultats de la recherche se mesure en comparant les réponses du système avec les réponses idéales que l'utilisateur espère recevoir. Plus les réponses du système correspondent à celles que l'utilisateur espère, plus le système est jugé performant. Les premiers systèmes permettaient d’effectuer des recherches booléennes, c’est à dire, des recherches où seule la présence ou l’absence d’un terme de la requête dans un texte permet de le sélectionner. Il a fallu attendre la fin des années 60, pour que l’on applique le modèle vectoriel aux problématiques de la recherche d’information. Dans ces deux modèles, seule la présence, l’absence, ou la fréquence des mots dans le texte est porteuse d’information. D’autres systèmes de recherche d’information adoptent cette approche dans la modélisation des données textuelles et dans le calcul de la similarité entre documents ou par rapport à une requête. SMART (System for the Mechanical Analysis and Retrieval of Text) [4] est l’un des premiers systèmes de recherche à avoir adopté cette approche. Plusieurs améliorations des systèmes de recherche d’information utilisent les relations sémantiques qui existent entre les termes dans un document. LSI (Latent Semantic Indexing) [5], par exemple réalise ceci à travers des méthodes d’analyse qui mesurent la cooccurrence entre deux termes dans un même contexte, tandis que Hearst et Morris [6] utilisent des thésaurus en ligne pour créer des liens sémantiques entre les termes dans un processus de chaines lexicales. Dans ces travaux nous développons un nouveau système de recherche qui permet de représenter les données textuelles par des signaux. Cette nouvelle forme de représentation nous permettra par la suite d’appliquer de nombreux outils mathématiques de la théorie du signal, tels que les transformées en ondelettes, jusqu’à aujourd’hui inconnues dans le domaine de la recherche d’information textuelle. / The purpose of information retrieval systems is to ease access to a set of documents and to allow a user to find those that are relevant, that is, those whose content best matches the user's information need. The quality of the retrieval results is measured by comparing the system's answers with the ideal answers the user hopes to receive; the closer the system's answers are to those the user expects, the better the system performs. The first retrieval systems performed Boolean searches, that is, searches in which only the presence or absence of a query term in a text determines whether it is selected. The vector model was not applied to information retrieval until the end of the sixties. In these two models, only the presence, absence, or frequency of words in the text carries information. Several information retrieval systems adopt this flat approach, often called 'bag of words', when modeling textual data and computing the similarity between documents or against a query: they consider only the presence, absence, or frequency of terms in a document when computing its relevance, whereas Hearst and Morris [6] use an online thesaurus to create semantic links between terms through lexical chains. In this thesis we develop a new retrieval system that represents textual data as signals. This new form of representation will later allow us to apply numerous mathematical tools from signal theory, such as wavelet transforms, so far unused in the field of textual information retrieval.
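As a toy illustration of the idea of treating text as a signal (the thesis's actual signal construction and retrieval pipeline are not described in the abstract), the sketch below maps a document to a term-frequency sequence over a fixed vocabulary and applies a discrete wavelet transform to it; the vocabulary, the wavelet, and the decomposition level are arbitrary choices.

```python
# Hedged sketch: a document becomes a term-frequency "signal" over a fixed
# vocabulary, and a multilevel DWT of that signal is computed. Vocabulary,
# wavelet and level are illustrative, not the thesis's construction.
from collections import Counter
import numpy as np
import pywt

def document_to_signal(text: str, vocabulary: list[str]) -> np.ndarray:
    """Map a document to a vector of term frequencies over a fixed vocabulary."""
    counts = Counter(text.lower().split())
    return np.array([counts[w] for w in vocabulary], dtype=float)

vocabulary = ["wavelet", "retrieval", "text", "signal", "query", "model", "index", "rank"]
doc = "wavelet transforms turn text retrieval into signal analysis of the text"
signal = document_to_signal(doc, vocabulary)

# The wavelet coefficients could then be compared across documents instead of
# the raw term frequencies.
coeffs = pywt.wavedec(signal, "haar", level=2)
print([c.round(2).tolist() for c in coeffs])
```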
|
208 |
Klasifikace spánkových EEG / Sleep scoring using EEG
Holdova, Kamila January 2013 (has links)
This thesis deals with wavelet analysis of sleep electroencephalograms (EEG) for sleep stage scoring. The theoretical part covers the theory of EEG signal generation and analysis and describes polysomnography (PSG), a method for simultaneously recording several electrical signals, chiefly the electroencephalogram (EEG), electromyogram (EMG) and electrooculogram (EOG), which is used to diagnose sleep disorders. Sleep, sleep stages and sleep disorders are therefore also described. In the practical part, the sleep EEGs are decomposed with the discrete wavelet transform (DWT) using the Daubechies 2 ("db2") mother wavelet at decomposition level seven, and the resulting data are classified with a feedforward neural network trained by error backpropagation.
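A minimal sketch of such a pipeline is given below, assuming 30-second epochs at 100 Hz, log-energy features per subband, and a small scikit-learn network with dummy labels; only the "db2" wavelet and the seven decomposition levels come from the abstract, everything else is an illustrative assumption.

```python
# Hedged sketch: level-7 "db2" DWT of an EEG epoch, per-subband log-energy
# features, and a feedforward neural network classifier. Epoch length,
# sampling rate, features, labels and network size are illustrative only.
import numpy as np
import pywt
from sklearn.neural_network import MLPClassifier

def dwt_energy_features(epoch: np.ndarray) -> np.ndarray:
    """Log-energy of each subband of a 7-level db2 decomposition."""
    coeffs = pywt.wavedec(epoch, "db2", level=7)
    return np.array([np.log(np.sum(c ** 2) + 1e-12) for c in coeffs])

rng = np.random.default_rng(0)
n_epochs, fs, seconds = 200, 100, 30                 # assumed 30 s epochs at 100 Hz
X = np.vstack([dwt_energy_features(rng.standard_normal(fs * seconds))
               for _ in range(n_epochs)])
y = rng.integers(0, 5, size=n_epochs)                # 5 sleep stages (dummy labels)

clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500, random_state=0)
clf.fit(X, y)
print(clf.score(X, y))
```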
|
209 |
Primes with a missing digit : distribution in arithmetic progressions and sieve-theoretic applications
Nath, Kunjakanan 07 1900 (has links)
Le thème de cette thèse est de comprendre la distribution des nombres premiers, qui est un sujet central de la théorie analytique des nombres. Plus précisément, nous allons prouver des théorèmes de type Bombieri-Vinogradov pour les nombres premiers avec un chiffre manquant dans leur développement b-adique pour un grand entier positif b. La preuve est basée sur la méthode du cercle, qui repose sur la structure de Fourier des entiers avec un chiffre manquant et les sommes exponentielles sur les nombres premiers dans les progressions arithmétiques. En combinant nos résultats avec le crible semi-linéaire, nous obtenons une borne supérieure et une borne inférieure avec le bon ordre de grandeur pour le nombre de nombres premiers de la forme p=1+m^2 + n^2 avec un chiffre manquant dans une grande base impaire b. / The theme of this thesis is to understand the distribution of prime numbers, which is a central topic in analytic number theory. More precisely, we prove Bombieri-Vinogradov type theorems for primes with a missing digit in their b-adic expansion for some large positive integer b. The proof is based on the circle method, which relies on the Fourier structure of the integers with a missing digit and the exponential sums over primes in arithmetic progressions. Combining our results with the semi-linear sieve, we obtain an upper bound and a lower bound of the correct order of magnitude for the number of primes of the form p=1+m^2+n^2 with a missing digit in a large odd base b.
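As a concrete and purely illustrative rendering of the objects being counted, the brute-force sketch below counts primes p = 1 + m^2 + n^2 up to a small limit whose base-10 expansion avoids the digit 9. The thesis works in a large base b and relies on the circle method, Bombieri-Vinogradov type estimates and the semi-linear sieve rather than enumeration; the base, the excluded digit and the limit here are arbitrary.

```python
# Hedged toy illustration: count primes p = 1 + m^2 + n^2 <= limit whose
# decimal expansion avoids a fixed digit. Not the thesis's method, only a
# way to make the counted set concrete.
from sympy import isprime

def missing_digit(p: int, digit: str = "9") -> bool:
    return digit not in str(p)

def count_primes_one_plus_two_squares(limit: int, digit: str = "9") -> int:
    found = set()
    m = 0
    while 1 + m * m <= limit:
        n = 0
        while 1 + m * m + n * n <= limit:
            p = 1 + m * m + n * n
            if missing_digit(p, digit) and isprime(p):
                found.add(p)
            n += 1
        m += 1
    return len(found)

print(count_primes_one_plus_two_squares(10_000))
```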
|
210 |
Parametric Scattering Networks
Gauthier, Shanel 04 1900 (has links)
La plupart des percées dans l'apprentissage profond et en particulier dans les réseaux de neurones convolutifs ont impliqué des efforts importants pour collecter et annoter des quantités massives de données. Alors que les mégadonnées deviennent de plus en plus répandues, il existe de nombreuses applications où la tâche d'annoter plus d'un petit nombre d'échantillons est irréalisable, ce qui a suscité un intérêt pour les tâches d'apprentissage sur petits échantillons.
Il a été montré que les transformées de diffusion d'ondelettes sont efficaces dans le cadre de données annotées limitées. La transformée de diffusion en ondelettes crée des invariants géométriques et une stabilité de déformation. Les filtres d'ondelettes utilisés dans la transformée de diffusion sont généralement sélectionnés pour créer une trame serrée via une ondelette mère paramétrée. Dans ce travail, nous étudions si cette construction standard est optimale. En nous concentrant sur les ondelettes de Morlet, nous proposons d'apprendre les échelles, les orientations et les rapports d'aspect des filtres. Nous appelons notre approche le Parametric Scattering Network. Nous illustrons que les filtres appris par le réseau de diffusion paramétrique peuvent être interprétés en fonction de la tâche spécifique sur laquelle ils ont été entrainés. Nous démontrons également empiriquement que notre transformée de diffusion paramétrique partage une stabilité aux déformations similaire à la transformée de diffusion traditionnelle. Enfin, nous montrons que notre version apprise de la transformée de diffusion génère des gains de performances significatifs par rapport à la transformée de diffusion standard lorsque le nombre d'échantillions d'entrainement est petit. Nos résultats empiriques suggèrent que les constructions traditionnelles des ondelettes ne sont pas toujours nécessaires. / Most breakthroughs in deep learning have required considerable effort to collect massive amounts of well-annotated data. As big data becomes more prevalent, there are many applications where annotating more than a small number of samples is impractical, leading to growing interest in small sample learning tasks and deep learning approaches towards them.
Wavelet scattering transforms have been shown to be effective in limited labeled data settings. The wavelet scattering transform creates geometric invariants and deformation stability. In multiple signal domains, it has been shown to yield more discriminative representations than other non-learned representations and to outperform learned representations in certain tasks, particularly on limited labeled data and highly structured signals. The wavelet filters used in the scattering transform are typically selected to create a tight frame via a parameterized mother wavelet. In this work, we investigate whether this standard wavelet filterbank construction is optimal. Focusing on Morlet wavelets, we propose to learn the scales, orientations, and aspect ratios of the filters to produce problem-specific parameterizations of the scattering transform. We call our approach the Parametric Scattering Network. We illustrate that filters learned by parametric scattering networks can be interpreted according to the specific task on which they are trained. We also empirically demonstrate that our parametric scattering transform retains a deformation stability similar to that of the traditional scattering transform. We further show that our approach yields significant performance gains in small-sample classification settings over the standard scattering transform. Moreover, our empirical results suggest that traditional filterbank constructions may not always be necessary for scattering transforms to extract useful representations.
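A minimal sketch of the core idea, learnable Morlet filter parameters, is given below; it builds a single Morlet-like 2-D filter whose scale, orientation, aspect ratio and carrier frequency are torch Parameters. The parameterization is simplified (the DC-correction term of the true Morlet wavelet and the whole scattering pipeline are omitted), and all default values are illustrative assumptions rather than the paper's.

```python
# Hedged sketch: a Morlet-like 2-D filter with learnable scale, orientation,
# aspect ratio and carrier frequency. The scattering transform itself and the
# Morlet DC-correction term are deliberately left out.
import torch
import torch.nn as nn

class LearnableMorlet(nn.Module):
    def __init__(self, size: int = 32, sigma: float = 4.0,
                 theta: float = 0.0, aspect: float = 0.5, xi: float = 2.0):
        super().__init__()
        self.size = size
        self.sigma = nn.Parameter(torch.tensor(sigma))    # scale
        self.theta = nn.Parameter(torch.tensor(theta))    # orientation
        self.aspect = nn.Parameter(torch.tensor(aspect))  # aspect ratio (slant)
        self.xi = nn.Parameter(torch.tensor(xi))          # carrier frequency

    def forward(self) -> torch.Tensor:
        """Return the complex filter as a (2, size, size) real/imag tensor."""
        r = torch.arange(self.size, dtype=torch.float32) - self.size // 2
        y, x = torch.meshgrid(r, r, indexing="ij")
        u = x * torch.cos(self.theta) + y * torch.sin(self.theta)
        v = -x * torch.sin(self.theta) + y * torch.cos(self.theta)
        envelope = torch.exp(-(u ** 2 + (self.aspect * v) ** 2) / (2 * self.sigma ** 2))
        phase = self.xi * u / self.sigma
        return torch.stack([envelope * torch.cos(phase), envelope * torch.sin(phase)])

filt = LearnableMorlet()
print(filt().shape, [p.shape for p in filt.parameters()])
```

Because every parameter is a torch Parameter, the filter bank can be optimized end to end with the downstream classifier, which is the sense in which the scattering transform becomes "parametric" here.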
|