  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
51

Contribuições para a análise de sinais neuronais e biomédicos

Santos, Vítor Lopes dos 03 March 2011 (has links)
Made available in DSpace on 2014-12-17T14:55:49Z (GMT). No. of bitstreams: 1 VitorLS_DISSERT.pdf: 1833534 bytes, checksum: 72ebc7d9d8be6ba8ae53eaad106afa8d (MD5) Previous issue date: 2011-03-03 / Conselho Nacional de Desenvolvimento Científico e Tecnológico / Following the trend toward interdisciplinarity in modern science, a new field called neuroengineering has emerged in recent decades; since 2000, scientific journals and conferences around the world have been created on this theme. The present work comprises three subareas related to neuroengineering, electrical engineering, and biomedical engineering: neural stimulation, theoretical and computational neuroscience, and neuronal signal processing. The research is divided into three parts: (i) A new method of neuronal photostimulation was developed based on caged compounds. Using the inhibitory neurotransmitter GABA caged by a ruthenium complex, it was possible to block neuronal population activity with a laser pulse. The results were evaluated by wavelet analysis and tested with non-parametric statistics. (ii) A mathematical method was created to identify neuronal assemblies. Neuronal assemblies, proposed by Donald Hebb as the basis of learning, remain the most accepted theory for the neuronal representation of external stimuli. Using the Marcenko-Pastur law of eigenvalue distribution, it was possible to detect neuronal assemblies and to compute their activity with high temporal resolution. Applying the method to real electrophysiological data revealed that neurons from the neocortex and hippocampus can be part of the same assembly, and that neurons can participate in multiple assemblies. (iii) A new method for the automatic classification of heart beats was developed, which does not rely on a database for training and is not specialized in specific pathologies. The method is based on wavelet decomposition and normality measures of random variables. Together, the results presented in these three fields of knowledge represent qualification in neural and biomedical engineering.
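The assembly-detection idea in part (ii) can be sketched as follows: z-score a binned spike matrix, take the eigenvalues of the neuron correlation matrix, and count those exceeding the Marcenko-Pastur upper bound for uncorrelated data. This is a minimal illustration with synthetic spike counts, not the thesis's implementation:

```python
import numpy as np

def count_assemblies(spikes):
    """Count significant eigenvalues of the neuron correlation matrix,
    i.e. those above the Marcenko-Pastur upper bound for uncorrelated data."""
    n_neurons, n_bins = spikes.shape
    z = spikes - spikes.mean(axis=1, keepdims=True)
    z /= spikes.std(axis=1, keepdims=True)          # z-score each neuron
    eigvals = np.linalg.eigvalsh(z @ z.T / n_bins)  # correlation-matrix spectrum
    lam_max = (1 + np.sqrt(n_neurons / n_bins)) ** 2
    return int(np.sum(eigvals > lam_max))

rng = np.random.default_rng(0)
noise = rng.poisson(2.0, size=(20, 5000)).astype(float)  # 20 independent neurons
spikes = noise.copy()
events = rng.random(5000) < 0.1          # shared activation events
spikes[:5, events] += 5.0                # neurons 0-4 form one assembly
print(count_assemblies(noise), count_assemblies(spikes))
```

Eigenvalues above the bound each indicate one assembly; projecting the data onto the corresponding eigenvectors would give the assembly activity over time.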
52

Quantificação de risco operacional

Aaltonen, Alex 17 October 2012 (has links)
Submitted by Alex Aaltonen (aaltonenalex@gmail.com) on 2012-11-13T15:27:19Z; approved for entry into archive by Suzinei Teles Garcia Garcia (suzinei.garcia@fgv.br) on 2012-11-13T17:17:41Z; made available in DSpace on 2012-11-13T17:34:20Z (GMT). No. of bitstreams: 1 Tese_em_versão_final_20121113_Título_Curto.pdf: 1228034 bytes, checksum: d53581eb0c3b7a27074aabe3b5db3550 (MD5) Previous issue date: 2012-10-17 / Measuring operational risk is necessary because it affects the value and survival of companies; controlling this risk is a central focus of researchers, financial-sector professionals, regulators, and bank supervisors. In this study we explored four applications of the loss distribution approach to quantifying operational risk. A set of operational losses spanning two years at a major Brazilian bank was used to apply and test this approach in its four variants. The variant using only empirical distributions proved the most appropriate for measuring operational risk and calculating economic capital from the available data. The variant that quantifies operational risk by fitting theoretical distributions to losses showed that the Johnson curves are particularly flexible and readily implemented. The Johnson curves were also fitted to the distribution of operational losses and to the empirical distribution of the economic capital amounts. Knowing the capital distribution gives a notion of the accuracy of the economic capital calculation and prepares the way for future theoretical studies of operational VaR. Rather than calculating a single capital amount, we determined the distribution of economic capital amounts.
We compared two methods used to establish capital amount distributions for the bank. Our study demonstrated the possibility of justifying verification points in internal audit procedures on the basis of operational risk data, modeling, and management. Based on these findings, we conclude by setting out recommendations for bank supervision and regulation.
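The empirical variant of the loss distribution approach described above can be illustrated with a small Monte Carlo: simulate annual aggregate losses with a Poisson frequency and severities resampled from the observed losses, then read economic capital off as a high quantile. The severity sample and rates below are invented, not the bank's data:

```python
import numpy as np

def operational_capital(losses, lam, alpha, n_sims=10000, seed=1):
    """Simulate annual aggregate losses (Poisson frequency, severities
    resampled from the empirical loss set) and return the alpha-quantile."""
    rng = np.random.default_rng(seed)
    losses = np.asarray(losses, dtype=float)
    counts = rng.poisson(lam, size=n_sims)               # loss events per year
    totals = np.array([rng.choice(losses, size=k).sum() for k in counts])
    return float(np.quantile(totals, alpha))

rng = np.random.default_rng(42)
sample_losses = rng.lognormal(mean=10.0, sigma=2.0, size=500)  # hypothetical severities
capital_999 = operational_capital(sample_losses, lam=30, alpha=0.999)
capital_99 = operational_capital(sample_losses, lam=30, alpha=0.99)
print(capital_99 < capital_999)  # higher confidence level -> more capital
```

Repeating the simulation with different seeds would yield a distribution of capital amounts rather than a single value, in the spirit of the study.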
53

Text-Based Information Retrieval Using Relevance Feedback

Krishnan, Sharenya January 2011 (has links)
Europeana, a freely accessible digital library founded by the European Commission in 2008, aims to make Europe's cultural and scientific heritage available to the public. The goal was to deliver semantically enriched digital content with multilingual access. Although the amount of content grew, Europeana gradually faced the problem of retrieving information held in unstructured form. To complement the Europeana portal services, ASSETS (Advanced Search Services and Enhanced Technological Solutions) was introduced, with services that sought to improve the usability and accessibility of Europeana. My contribution is to study different text-based information retrieval models and their relevance feedback techniques, and to implement one simple model. The thesis gives a detailed overview of the information retrieval process, together with the implementation of the chosen relevance feedback strategy, which generates automatic query expansion. Finally, the thesis concludes with an analysis of the results obtained with relevance feedback, a discussion of the model implemented, and an assessment of the future use of this model, both as a continuation of my work and within ASSETS.
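A classic way to realize relevance feedback with automatic query expansion is the Rocchio update; the sketch below uses a toy term-weight vocabulary (all weights hypothetical) and stands in for, rather than reproduces, the model implemented in the thesis:

```python
import numpy as np

def rocchio(query, relevant, nonrelevant, alpha=1.0, beta=0.75, gamma=0.15):
    """Rocchio relevance feedback: move the query vector toward the centroid
    of relevant documents and away from the non-relevant centroid."""
    q = alpha * query
    if len(relevant):
        q = q + beta * np.mean(relevant, axis=0)
    if len(nonrelevant):
        q = q - gamma * np.mean(nonrelevant, axis=0)
    return np.clip(q, 0.0, None)          # drop negative term weights

vocab = ["europeana", "heritage", "painting", "football"]
query = np.array([1.0, 0.0, 0.0, 0.0])    # user searched only "europeana"
relevant = np.array([[1.0, 0.8, 0.6, 0.0],
                     [0.9, 0.7, 0.0, 0.0]])
nonrelevant = np.array([[0.2, 0.0, 0.0, 1.0]])
new_q = rocchio(query, relevant, nonrelevant)
expanded = [t for i, t in enumerate(vocab) if new_q[i] > 0 and query[i] == 0]
print(expanded)  # terms promoted into the query by feedback
```

Terms that gain positive weight from the relevant documents ("heritage", "painting") are the automatic query expansion; terms pulled negative by non-relevant documents are dropped.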
54

Quality strategy and method for transmission : application to image / Évaluation de la qualité des images dans un contexte de transmission

Xie, Xinwen 10 January 2019 (has links)
This thesis focuses on strategies for improving image quality in wireless communication systems and on the design of new quality evaluation metrics. First, a new reduced-reference image quality metric, based on a statistical model in the complex wavelet domain, is proposed. The magnitude and relative phase information of the dual-tree complex wavelet transform coefficients are modelled using probability density functions, and the associated parameters serve as the reduced-reference features transmitted to the receiver. A generalized regression neural network is then used to construct the mapping between the reduced-reference features and the objective quality score. Second, using the new metric, a new decoding strategy is proposed for image transmission over a realistic wireless channel, improving the quality of experience (QoE) while ensuring the quality of service (QoS). To this end, a new image database was built and subjective quality assessment tests were carried out to collect people's visual preferences when selecting images decoded with different configurations; a classifier based on support vector machines or k-nearest neighbours is used to select the best decoding configuration automatically. Finally, an improved metric is proposed that better accounts for the specific properties of the distortion and for user preference. It combines global and local image features and is shown to further improve the decoding strategy. Experimental results validate the effectiveness of the proposed image quality metrics and image transmission strategies.
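The configuration-selection step can be illustrated with a minimal k-nearest-neighbour classifier over reduced-reference features; the 2-D features and labels below are invented for illustration and stand in for the SVM/k-NN selectors used in the thesis:

```python
import numpy as np

def knn_predict(X_train, y_train, x, k=3):
    """Minimal k-nearest-neighbour classifier: majority vote among the
    k training points closest to x."""
    dist = np.linalg.norm(X_train - x, axis=1)
    votes = y_train[np.argsort(dist)[:k]]
    labels, counts = np.unique(votes, return_counts=True)
    return int(labels[np.argmax(counts)])

# invented 2-D reduced-reference features -> preferred decoding configuration
X = np.array([[0.10, 0.20], [0.20, 0.10], [0.15, 0.15],
              [0.90, 0.80], [0.80, 0.90], [0.85, 0.85]])
y = np.array([0, 0, 0, 1, 1, 1])
print(knn_predict(X, y, np.array([0.12, 0.18])),
      knn_predict(X, y, np.array([0.88, 0.84])))
```

In the thesis's setting, the training labels would come from the subjective preference tests, and a new image's features would select its decoding configuration the same way.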
55

Approximations and Applications of Nonlinear Filters / Approximation und Anwendung nichtlinearer Filter

Bröcker, Jochen 30 January 2003 (has links)
No description available.
56

Segmentação Fuzzy de Texturas e Vídeos

Santos, Tiago Souza dos 17 August 2012 (has links)
Made available in DSpace on 2014-12-17T15:48:04Z (GMT). No. of bitstreams: 1 TiagoSS_DISSERT.pdf: 2900373 bytes, checksum: ea7bd73351348f5c75a5bf4f337c599f (MD5) Previous issue date: 2012-08-17 / Conselho Nacional de Desenvolvimento Científico e Tecnológico / The segmentation of an image aims to subdivide it into constituent regions or objects that have some relevant semantic content. This subdivision can also be applied to videos; in that case, the objects appear across the various frames that compose the video. The task of segmenting an image becomes more complex when it is composed of objects defined by textural features, where color information alone is not a good descriptor of the image. Fuzzy segmentation is a region-growing segmentation algorithm that uses affinity functions to assign to each element in an image a grade of membership (between 0 and 1) for each object. This work presents a modification of the fuzzy segmentation algorithm proposed by Carvalho [Carvalho et al. 2005], with the aim of improving its temporal and spatial complexity. The algorithm was adapted to segment color videos, treating them as 3D volumes. To segment the videos, either a conventional color model or a hybrid model obtained by a method for choosing the best channels was used. The fuzzy segmentation algorithm was also applied to texture segmentation by using affinity functions adapted to the texture of each object. Two types of affinity functions were used: one defined using the normal (Gaussian) probability distribution and the other using the skew divergence. The latter, a variation of the Kullback-Leibler divergence, is a measure of the difference between two probability distributions. Finally, the algorithm was tested on some videos and on texture mosaic images composed of images from the Brodatz album and others.
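The skew divergence mentioned above can be written down directly; this sketch (with made-up three-bin texture histograms) shows why it stays finite where the plain Kullback-Leibler divergence would not:

```python
import numpy as np

def kl(p, q):
    """Kullback-Leibler divergence between two discrete distributions."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

def skew_divergence(p, q, alpha=0.99):
    """Skew divergence KL(p || alpha*q + (1-alpha)*p): mixing a little of p
    into q keeps the divergence finite when q has empty bins."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    return kl(p, alpha * q + (1 - alpha) * p)

p = np.array([0.5, 0.3, 0.2])      # toy texture histogram
q = np.array([0.6, 0.4, 0.0])      # q is empty where p is not
print(skew_divergence(p, p), skew_divergence(p, q))
```

Plain KL(p||q) would be infinite here because q's last bin is empty; the skewed mixture makes it usable as an affinity measure between observed and model texture histograms.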
57

自變數增加對岭估計的影響分析 (The influence of adding explanatory variables on ridge estimation)

萬世卿, Wan, Shin Chin Unknown Date (has links)
In least squares estimation, collinearity among the explanatory variables inflates the variance of the parameter estimates, making them unstable. Many methods address the effect of collinearity on parameter estimation; ridge estimation is one of them. To detect explanatory variables that are influential on the ridge estimates, this thesis follows the approach of Schall and Dunne, deriving analogous Cook statistics and AP estimators, and further proposes using the symmetric Kullback-Leibler divergence to detect explanatory variables influential on the ridge estimates. Finally, two real examples, on the Canadian financial market and on a survey of employee satisfaction with supervisors, illustrate the proposed methods for detecting explanatory variables that influence the ridge estimates.
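A minimal sketch of the collinearity problem and the ridge estimator discussed above, on synthetic data (the thesis's influence diagnostics, the Cook-type statistic and AP estimator, are beyond a few lines):

```python
import numpy as np

def ridge(X, y, k):
    """Ridge estimator (X'X + kI)^{-1} X'y; k = 0 gives ordinary least squares."""
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + k * np.eye(p), X.T @ y)

rng = np.random.default_rng(0)
n = 100
x1 = rng.normal(size=n)
x2 = x1 + 0.01 * rng.normal(size=n)     # nearly collinear with x1
X = np.column_stack([x1, x2])
y = x1 + x2 + rng.normal(size=n)        # true coefficients (1, 1)
b_ols = ridge(X, y, 0.0)                # unstable under collinearity
b_ridge = ridge(X, y, 1.0)              # shrunken, stabilized estimates
print(b_ols, b_ridge)
```

The ridge solution always has a smaller norm than the OLS solution for k > 0; influence diagnostics then ask how much each added regressor perturbs these shrunken estimates.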
58

Estimation, validation et identification des modèles ARMA faibles multivariés

Boubacar Mainassara, Yacouba 28 November 2009 (has links) (PDF)
In this thesis we broaden the range of application of vector ARMA (AutoRegressive Moving-Average) models by considering error terms that are uncorrelated but may contain nonlinear dependencies. These models, called weak vector ARMA models, can handle processes with very general nonlinear dynamics. By contrast, we call strong ARMA the models usually considered in the literature, in which the error term is assumed to be an iid noise. Since weak ARMA models are, in particular, dense in the set of regular stationary processes, they are far more general than strong ARMA models. Our concern is the statistical analysis of weak vector ARMA models, more precisely estimation and validation. We first study the asymptotic properties of the quasi-maximum likelihood estimator and the least squares estimator. The asymptotic variance matrix of these estimators has a "sandwich" form and can be very different from the asymptotic variance obtained in the strong case. We then give particular attention to validation problems: first by proposing modified versions of the Wald, Lagrange multiplier, and likelihood ratio tests for testing linear restrictions on the parameters of weak vector ARMA models; second by considering residual-based tests, which check that the residuals of the estimated models are indeed estimates of white noise. In particular, we study portmanteau tests, also called autocorrelation tests.
We show that the asymptotic distribution of the residual autocorrelations is normal, with a covariance matrix different from the strong case (that is, under iid assumptions on the noise). From this we derive the asymptotic behaviour of the portmanteau statistics. In the standard strong ARMA framework, it is known that the asymptotic distribution of the portmanteau tests is well approximated by a chi-squared distribution. In the general case, we show that this asymptotic distribution is that of a weighted sum of chi-squared variables, which can be very different from the usual chi-squared approximation of the strong case. We therefore propose modified portmanteau tests for checking the adequacy of weak vector ARMA models. Finally, we consider the selection of weak vector ARMA models based on minimizing an information criterion, notably the one introduced by Akaike (AIC). This criterion attempts to approximate the distance (often called the Kullback-Leibler information) between the true, unknown distribution of the observations and that of the estimated model. We show that the corrected criterion (AICc) for weak vector ARMA models can, here too, be very different from the strong case.
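The weighted-chi-squared limit mentioned above can be made concrete with a small Monte Carlo experiment; the weights below are invented (in practice they would be eigenvalues estimated from the data):

```python
import numpy as np

def weighted_chi2_pvalue(stat, weights, n_sims=100000, seed=0):
    """Monte-Carlo p-value for a statistic whose null law is a weighted sum
    of independent chi-squared(1) variables."""
    rng = np.random.default_rng(seed)
    draws = rng.chisquare(1, size=(n_sims, len(weights))) @ np.asarray(weights, float)
    return float(np.mean(draws >= stat))

# with equal unit weights we recover the usual chi-squared(m) approximation;
# 11.07 is the 5% critical value of chi-squared with m = 5 degrees of freedom
p_equal = weighted_chi2_pvalue(11.07, np.ones(5))
# unequal weights (as can arise under weak-ARMA assumptions) change the p-value
p_unequal = weighted_chi2_pvalue(11.07, np.array([3.0, 1.0, 0.5, 0.3, 0.2]))
print(p_equal, p_unequal)
```

Using the strong-case chi-squared critical value when the true null law has unequal weights would misstate the test's level, which is exactly what the modified portmanteau tests correct.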
59

Suivi d'objets d'intérêt dans une séquence d'images : des points saillants aux mesures statistiques

Vincent, Garcia 11 December 2008 (has links) (PDF)
The problem of tracking objects in a video arises in fields such as computer vision (video surveillance, for example) and television and film post-production (special effects). It comes in two main variants: tracking a region of interest, which denotes coarse tracking of an object, and spatio-temporal segmentation, which corresponds to precise tracking of the contours of the object of interest. In both cases, the region or object of interest must first have been outlined on the first, and possibly the last, image of the video sequence. In this thesis we propose a method for each of these types of tracking, as well as a fast implementation, exploiting the Graphics Processing Unit (GPU), of a region-of-interest tracking method developed elsewhere.
The first method relies on the analysis of temporal trajectories of salient points and performs region-of-interest tracking. Salient points (typically points of high curvature of the iso-intensity lines) are detected in all images of the sequence. Trajectories are built by linking points in successive images whose neighbourhoods are coherent. Our first contribution is the analysis of trajectories over a group of images, which improves the quality of motion estimation. In addition, we use a spatio-temporal weighting for each trajectory, which adds a temporal constraint on the motion while taking into account local geometric deformations of the object that a global motion model ignores.
The second method performs spatio-temporal segmentation. It relies on estimating the motion of the object's contour from the information contained in a ring extending on both sides of this contour. This ring captures the contrast between the background and the object in a local context; this is our first contribution here. Moreover, matching a portion of the ring to a zone of the next image in the sequence with a statistical similarity measure, namely the entropy of the residual, improves the tracking while making it easier to choose the optimal ring size.
Finally, we propose a fast implementation of an existing region-of-interest tracking method. This method relies on a statistical similarity measure, the Kullback-Leibler divergence, which can be estimated in a high-dimensional space through multiple computations of distances to the k-th nearest neighbour in that space. Since these computations are very costly, we propose a parallel GPU implementation (using NVIDIA's CUDA programming interface) of the exhaustive search for the k nearest neighbours. We show that this implementation speeds up object tracking by a factor of up to 15 compared with an implementation of this search that requires prior structuring of the data.
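The Kullback-Leibler estimation at the heart of the accelerated method can be sketched in one dimension with a brute-force nearest-neighbour search, in the style of the Wang-Kulkarni-Verdú estimator (the GPU version parallelizes exactly these distance computations); the sample sizes and distributions are illustrative:

```python
import numpy as np

def knn_kl_divergence(x, y, k=1):
    """Brute-force k-NN estimate of KL(P||Q) from 1-D samples x ~ P, y ~ Q."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    n, m = len(x), len(y)
    # k-th nearest neighbour of each x_i among the other x's (column 0 is self)
    d_xx = np.sort(np.abs(x[:, None] - x[None, :]), axis=1)[:, k]
    # k-th nearest neighbour of each x_i among the y's
    d_xy = np.sort(np.abs(x[:, None] - y[None, :]), axis=1)[:, k - 1]
    return float(np.mean(np.log(d_xy / d_xx)) + np.log(m / (n - 1)))

rng = np.random.default_rng(3)
p_samples = rng.normal(0.0, 1.0, size=2000)
q_samples = rng.normal(1.0, 1.0, size=2000)
est = knn_kl_divergence(p_samples, q_samples)
print(est)  # true value: KL(N(0,1) || N(1,1)) = 0.5
```

The pairwise-distance matrices here are what make the exhaustive search costly in high dimension, and what a GPU can compute in parallel.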
60

Neuronal Dissimilarity Indices that Predict Oddball Detection in Behaviour

Vaidhiyan, Nidhin Koshy January 2016 (has links) (PDF)
Our vision is as yet unsurpassed by machines because of the sophisticated representations of objects in our brains. This representation is vastly different from the pixel-based representation used in machine storage. It is this sophisticated representation that enables us to perceive two faces as very different, i.e., as far apart in the "perceptual space", even though they are close to each other in their pixel-based representations. Neuroscientists have proposed distances between the responses of neurons to images (as measured in macaque monkeys) as a quantification of the "perceptual distance" between the images; let us call these neuronal dissimilarity indices of perceptual distances. They have also proposed behavioural experiments to quantify these perceptual distances: human subjects are asked to identify, as quickly as possible, an oddball image embedded among multiple distractor images, and the reciprocal of the search time for identifying the oddball is taken as a measure of the perceptual distance between the oddball and the distractor. Let us call such estimates behavioural dissimilarity indices. In this thesis, we describe a decision-theoretic model for visual search that suggests a connection between these two notions of perceptual distance. In the first part of the thesis, we model visual search as an active sequential hypothesis testing problem. Our analysis suggests an appropriate neuronal dissimilarity index which correlates strongly with the reciprocal of search times. We also consider a number of alternatives such as relative entropy (Kullback-Leibler divergence), Chernoff entropy, and the L1-distance associated with the neuronal firing rate profiles. We then develop a means to rank the various neuronal dissimilarity indices by how well they explain the behavioural observations. Our proposed dissimilarity index does best, followed by relative entropy, then Chernoff entropy, and then the L1 distance.
In the second part of the thesis, we consider a scenario where the subject has to find an oddball image, but without any prior knowledge of the oddball and distractor images. Equivalently, in the neuronal space, the task for the decision maker is to find the image that elicits firing rates different from the others. Here, the decision maker has to “learn” the underlying statistics and then make a decision on the oddball. We model this scenario as one of detecting an odd Poisson point process having a rate different from the common rate of the others. The revised model suggests a new neuronal dissimilarity index. The new dissimilarity index is also strongly correlated with the behavioural data. However, the new dissimilarity index performs worse than the dissimilarity index proposed in the first part on existing behavioural data. The degradation in performance may be attributed to the experimental setup used for the current behavioural tasks, where search tasks associated with a given image pair were sequenced one after another, thereby possibly cueing the subject about the upcoming image pair, and thus violating the assumption of this part on the lack of prior knowledge of the image pairs to the decision maker. In conclusion, the thesis provides a framework for connecting the perceptual distances in the neuronal and the behavioural spaces. Our framework can possibly be used to analyze the connection between the neuronal space and the behavioural space for various other behavioural tasks.
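As a toy illustration of the Poisson point-process view above, a relative-entropy based dissimilarity between two firing rates can be computed in closed form; the rates below are hypothetical, and this symmetrised index is only one of the candidate indices the thesis compares:

```python
import math

def poisson_kl(l1, l2):
    """KL divergence between Poisson spike-count distributions with means l1, l2."""
    return l1 * math.log(l1 / l2) + l2 - l1

def symmetric_kl(l1, l2):
    """Symmetrised relative entropy between the two firing-rate distributions."""
    return poisson_kl(l1, l2) + poisson_kl(l2, l1)

# hypothetical firing rates (spikes/s) elicited by oddball vs. distractor images
print(symmetric_kl(20.0, 18.0))  # similar rates: small dissimilarity, slow search
print(symmetric_kl(20.0, 10.0))  # distinct rates: large dissimilarity, fast search
```

Under the framework described above, a larger dissimilarity between the oddball and distractor response distributions should predict a shorter search time, i.e., a larger behavioural dissimilarity index.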
