  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
521

Utilisation d'une assimilation d'ensemble pour modéliser des covariances d'erreur d'ébauche dépendantes de la situation météorologique à échelle convective / Use of an ensemble data assimilation to model flow-dependent background error covariances at convective scale

Ménétrier, Benjamin 03 July 2014 (has links)
L'assimilation de données vise à fournir aux modèles de prévision numérique du temps un état initial de l'atmosphère le plus précis possible. Pour cela, elle utilise deux sources d'information principales : des observations et une prévision récente appelée "ébauche", toutes deux entachées d'erreurs. La distribution de ces erreurs permet d'attribuer un poids relatif à chaque source d'information, selon la confiance que l'on peut lui accorder, d'où l'importance de pouvoir estimer précisément les covariances de l'erreur d'ébauche. Les méthodes de type Monte-Carlo, qui échantillonnent ces covariances à partir d'un ensemble de prévisions perturbées, sont considérées comme les plus efficaces à l'heure actuelle. Cependant, leur coût de calcul considérable limite de facto la taille de l'ensemble. Les covariances ainsi estimées sont donc contaminées par un bruit d'échantillonnage, qu'il est nécessaire de filtrer avant toute utilisation. Cette thèse propose des méthodes de filtrage du bruit d'échantillonnage dans les covariances d'erreur d'ébauche pour le modèle à échelle convective AROME de Météo-France. Le premier objectif a consisté à documenter la structure des covariances d'erreur d'ébauche pour le modèle AROME. Une assimilation d'ensemble de grande taille a permis de caractériser la nature fortement hétérogène et anisotrope de ces covariances, liée au relief, à la densité des observations assimilées, à l'influence du modèle coupleur, ainsi qu'à la dynamique atmosphérique. En comparant les covariances estimées par deux ensembles indépendants de tailles très différentes, le bruit d'échantillonnage a pu être décrit et quantifié. Pour réduire ce bruit d'échantillonnage, deux méthodes ont été développées historiquement, de façon distincte : le filtrage spatial des variances et la localisation des covariances. On montre dans cette thèse que ces méthodes peuvent être comprises comme deux applications directes du filtrage linéaire des covariances. L'existence de critères d'optimalité spécifiques au filtrage linéaire de covariances est démontrée dans une seconde partie du travail. Ces critères présentent l'avantage de n'impliquer que des grandeurs pouvant être estimées de façon robuste à partir de l'ensemble. Ils restent très généraux et l'hypothèse d'ergodicité nécessaire à leur estimation n'est requise qu'en dernière étape. Ils permettent de proposer des algorithmes objectifs de filtrage des variances et pour la localisation des covariances. Après un premier test concluant dans un cadre idéalisé, ces nouvelles méthodes ont ensuite été évaluées grâce à l'ensemble AROME. On a pu montrer que les critères d'optimalité pour le filtrage homogène des variances donnaient de très bons résultats, en particulier le critère prenant en compte la non-gaussianité de l'ensemble. La transposition de ces critères à un filtrage hétérogène a permis une légère amélioration des performances, à un coût de calcul plus élevé cependant. Une extension de la méthode a ensuite été proposée pour les composantes du tenseur de la hessienne des corrélations locales. Enfin, les fonctions de localisation horizontale et verticale ont pu être diagnostiquées, uniquement à partir de l'ensemble. Elles ont montré des variations cohérentes selon la variable et le niveau concernés, et selon la taille de l'ensemble. 
Dans une dernière partie, on a évalué l'influence de l'utilisation de variances hétérogènes dans le modèle de covariances d'erreur d'ébauche d'AROME, à la fois sur la structure des covariances modélisées et sur les scores des prévisions. Le manque de réalisme des covariances modélisées et l'absence d'impact positif pour les prévisions soulèvent des questions sur une telle approche. Les méthodes de filtrage développées au cours de cette thèse pourraient toutefois mener à d'autres applications fructueuses au sein d'approches hybrides de type EnVar, qui constituent une voie prometteuse dans un contexte d'augmentation de la puissance de calcul disponible. / Data assimilation aims at providing an initial state as accurate as possible for numerical weather prediction models, using two main sources of information: observations and a recent forecast called the “background”. Both are affected by systematic and random errors. The precise estimation of the distribution of these errors is crucial for the performance of data assimilation. In particular, background error covariances can be estimated by Monte-Carlo methods, which sample them from an ensemble of perturbed forecasts. Because of computational costs, the ensemble size is much smaller than the dimension of the error covariances, and statistics estimated in this way are contaminated by sampling noise. Filtering is necessary before any further use. This thesis proposes methods to filter the sampling noise of forecast error covariances. The final goal is to improve the background error covariances of the convective scale model AROME of Météo-France. The first objective is to document the structure of background error covariances for AROME. A large ensemble data assimilation is set up for this purpose. It allows a fine characterization of the highly heterogeneous and anisotropic nature of the covariances. These covariances are strongly influenced by the topography, by the density of assimilated observations, by the coupling model, and also by the atmospheric dynamics. The comparison of the covariances estimated from two independent ensembles of very different sizes gives a description and quantification of the sampling noise. To damp this sampling noise, two methods have been historically developed in the community: spatial filtering of variances and localization of covariances. We show in this thesis that these methods can be understood as two direct applications of the theory of linear filtering of covariances. The existence of specific optimality criteria for the linear filtering of covariances is demonstrated in the second part of this work. These criteria have the advantage of involving only quantities that can be robustly estimated from the ensemble. They are fully general, and the ergodicity assumption necessary for their estimation is required only in the last step. They allow the variance filtering and the covariance localization to be objectively determined. These new methods are first illustrated in an idealized framework. They are then evaluated with various metrics, thanks to the large ensemble of AROME forecasts. It is shown that the optimality criteria for the homogeneous filtering of variances yield very good results, particularly with the criterion taking the non-Gaussianity of the ensemble into account. The transposition of these criteria to heterogeneous filtering slightly improves performance, yet at a higher computational cost.
An extension of the method is proposed for the components of the local correlation Hessian tensor. Finally, horizontal and vertical localization functions are diagnosed from the ensemble itself. They show consistent variations depending on the considered variable and level, and on the ensemble size. Lastly, the influence of using heterogeneous variances in the background error covariance model of AROME is evaluated. We focus first on the description of the covariances modelled with these variances and then on forecast scores. The lack of realism of the modelled covariances and the absence of a positive impact on forecast scores raise questions about such an approach. However, the filtering methods developed in this thesis are general. They are likely to lead to other fruitful applications within the framework of hybrid EnVar approaches, which are a promising avenue in a context of growing computational resources.
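To make the sampling-noise and localization ideas above concrete, here is a small editor's sketch (not taken from the thesis; the state size, ensemble size and length scales are invented) that estimates a covariance matrix from a small ensemble and damps the resulting noise with a Schur-product localization:

```python
# Editor's sketch (invented sizes): estimate a covariance matrix from a small
# ensemble and damp the sampling noise with a Schur-product localization.
import numpy as np

rng = np.random.default_rng(0)
n, n_ens = 100, 20                       # state size and ensemble size (illustrative)

# "True" background error covariance: Gaussian correlations with length scale L
x = np.arange(n)
L = 5.0
B_true = np.exp(-0.5 * ((x[:, None] - x[None, :]) / L) ** 2) + 1e-9 * np.eye(n)

# Draw a small ensemble of perturbations and estimate B by sampling
ens = rng.multivariate_normal(np.zeros(n), B_true, size=n_ens)
B_ens = np.cov(ens, rowvar=False)        # contaminated by sampling noise

# Localization: element-wise (Schur) product with a decaying taper function
loc_radius = 15.0
C_loc = np.exp(-0.5 * ((x[:, None] - x[None, :]) / loc_radius) ** 2)
B_loc = B_ens * C_loc

for name, B in [("raw ensemble", B_ens), ("localized", B_loc)]:
    err = np.linalg.norm(B - B_true) / np.linalg.norm(B_true)
    print(f"{name:12s} relative error: {err:.3f}")
```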
522

Control of Markov Jump Linear Systems with uncertain detections. / Controle de sistemas com saltos markovianos e detecções sujeitas a incertezas.

Stadtmann, Frederik 02 April 2019 (has links)
This monograph addresses control and filtering problems for systems whose behavior changes suddenly and whose changes are detected and estimated by an imperfect detector. More precisely, it considers continuous-time Markov Jump Linear Systems (MJLS) in which the current mode of operation is estimated by a detector. This detector is assumed to be imperfect in the sense that the detected mode of operation may diverge from the real mode of operation. Furthermore, the probabilities of these detections are considered to be known. It is assumed that the detector has its own dynamics, which means that the detected mode of operation can change independently of the real mode of operation. The novelty of this approach lies in how uncertainties are modeled: a Hidden Markov Model (HMM) is used to model the uncertainties introduced by the detector. For these systems the following problems are addressed: i) stochastic stabilizability in the mean-square sense, ii) H2 control, iii) H∞ control and iv) the H∞ filtering problem. Solutions based on Linear Matrix Inequalities (LMI) are developed for each of these problems. In the case of the H2 control problem, the solution minimizes an upper bound for the H2 norm of the closed-loop control system. For the H∞ control problem, a solution is presented that minimizes an upper bound for the H∞ norm of the closed-loop control system. In the case of H∞ filtering, the solution presented minimizes the H∞ norm of a system representing the estimation error. The solutions for the control problems are illustrated using a numerical example modeling a simple two-tank process. / Esta monografia aborda problemas de controle e filtragem em sistemas com saltos espontâneos que alteram seu comportamento e cujas mudanças são detectadas e estimadas por um detector imperfeito. Mais precisamente, consideramos sistemas lineares cujos saltos podem ser modelados usando um processo markoviano (Markov Jump Linear Systems) e cujo modo de operação corrente é estimado por um detector. O detector é considerado imperfeito tendo em vista a possibilidade de divergência entre o modo real de operação e o modo de operação detectado. Ademais, as probabilidades das detecções são consideradas conhecidas. Assumimos que o detector possui uma dinâmica própria, o que significa que o modo de operação detectado pode mudar independentemente do modo real de operação. A novidade dessa abordagem está na modelagem das incertezas. Um processo oculto de Markov (HMM) é usado para modelar as incertezas introduzidas pelo detector. Para esses sistemas, os seguintes problemas são abordados: i) estabilidade quadrática, ii) controle H2, iii) controle H∞ e iv) o problema da filtragem H∞. Soluções baseadas em Desigualdades Matriciais Lineares (LMI) são desenvolvidas para cada um desses problemas. No caso do problema de controle H2, a solução minimiza um limite superior para a norma H2 do sistema de controle em malha fechada. Para o problema de controle H∞, é apresentada uma solução que minimiza um limite superior para a norma H∞ do sistema de controle em malha fechada. No caso da filtragem H∞, a solução apresentada minimiza a norma H∞ de um sistema que representa o erro de estimativa. As soluções para os problemas de controle são ilustradas usando um exemplo numérico que modela um processo simples de dois tanques.
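The solutions described in this abstract are LMI-based. Purely as background for the mean-square stability notion in item i), the following editor's sketch (invented mode matrices and jump rates, and no detector model) checks the decay of E[|x(T)|²] for a small continuous-time MJLS by Monte Carlo simulation:

```python
# Editor's sketch: Monte Carlo check of mean-square stability for a small
# continuous-time Markov Jump Linear System dx/dt = A_{theta(t)} x(t).
# Mode matrices and jump rates are invented; the imperfect detector and the
# LMI machinery of the thesis are not modeled here.
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)

A = [np.array([[-1.0, 0.5], [0.0, -0.8]]),    # mode 0 dynamics
     np.array([[-0.2, 1.0], [-1.0, -0.2]])]   # mode 1 dynamics
Lam = np.array([[-0.5, 0.5],                  # CTMC generator (rows sum to zero)
                [0.3, -0.3]])

def second_moment(T=20.0, n_runs=500):
    """Average |x(T)|^2 over Monte Carlo realizations of the Markov chain."""
    acc = 0.0
    for _ in range(n_runs):
        x = np.array([1.0, 1.0])
        mode, t = 0, 0.0
        while t < T:
            dwell = rng.exponential(1.0 / -Lam[mode, mode])
            if t + dwell >= T:                # horizon reached before the next jump
                x = expm(A[mode] * (T - t)) @ x
                break
            x = expm(A[mode] * dwell) @ x     # propagate until the jump instant
            t += dwell
            p = np.maximum(Lam[mode], 0.0)    # off-diagonal rates give jump targets
            mode = rng.choice(len(A), p=p / p.sum())
        acc += x @ x
    return acc / n_runs

print("E[|x(T)|^2] ≈", second_moment())       # a small value suggests MS stability
```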
523

[en] NEURAL NETWORKS IN THE IDENTIFICATION OF COMMERCIAL LOSSES OF THE ELECTRICAL SECTOR / [pt] REDES NEURAIS NA IDENTIFICAÇÃO DE PERDAS COMERCIAIS DO SETOR ELÉTRICO

GUSTAVO VICTOR CHAVEZ ORTEGA 16 April 2009 (has links)
[pt] Atualmente, um dos maiores problemas das empresas brasileiras distribuidoras de energia elétrica é o de perdas comerciais, responsáveis pela maior parte das perdas do setor. A Light, por exemplo, é a terceira distribuidora com maiores perdas comerciais no Brasil, com 3,79 milhões de clientes de baixa tensão em 31 municípios do Estado do Rio de Janeiro. Estas perdas são causadas por fraudes nos medidores de energia, por equipamentos defeituosos e, principalmente, pelas ligações clandestinas, conhecidas por gatos, gambiarras ou macacos. Uma forma tradicional de combate às Perdas Comerciais é a realização de inspeções nos consumidores. Entretanto, a seleção de quais consumidores devem ser inspecionados é uma tarefa árdua para os especialistas no assunto. As distribuidoras geralmente empregam um conjunto de metodologias heurísticas para identificar os clientes de baixa tensão suspeitos de estarem cometendo algum tipo de irregularidade. Todavia, a média de acertos dessas metodologias ainda é bastante inferior ao desejado, acarretando prejuízos elevados para as distribuidoras brasileiras. No caso específico da Light, a média de acerto na comprovação de clientes fraudadores é de apenas 25%. Verifica-se, portanto, que o processo adotado não é eficiente. Portanto, este trabalho tem como objetivo desenvolver uma metodologia que identifique, com maior precisão, o perfil do cliente irregular (comprovada fraude no medidor, furto por ligação clandestina ou irregularidade técnica). O sistema inteligente resultante, denominado SIIPERCOM, baseia-se em Redes Neurais, para a filtragem agrupando clientes com comportamentos semelhantes e classificação dos clientes de cada grupo em normais ou irregulares. / [en] Currently, one of the biggest problems faced by Brazilian electrical power distribution companies is commercial losses, which account for most of the losses in the sector. Light, for example, is the distributor with the third largest commercial losses in Brazil, with 3.79 million low-voltage customers in 31 municipalities of the State of Rio de Janeiro. These losses are caused by fraud in energy meters, by defective equipment and, mainly, by illegal connections, known in Brazil as gatos, gambiarras or macacos. The traditional way of fighting commercial losses is to inspect consumers. However, selecting which consumers should be inspected is an arduous task for specialists in the field. Distributors usually employ a set of heuristic methodologies to identify low-voltage customers suspected of committing some type of irregularity. Nevertheless, the hit rate of these methodologies is still far lower than desired, causing heavy losses to Brazilian distributors. In the specific case of Light, the rate of confirmed fraud among inspected customers is only 25%, which shows that the adopted process is not efficient. Therefore, this work aims to develop a methodology that identifies, with greater precision, the profile of the irregular customer (proven meter fraud, theft through an illegal connection, or a technical irregularity). The resulting intelligent system, called SIIPERCOM, is based on Neural Networks: a filtering stage groups customers with similar behavior, and a classification stage labels the customers of each group as normal or irregular.
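The two-stage idea described above (group customers with similar behavior, then classify within each group) can be sketched as follows. This is an editor's illustration on synthetic data with invented features and labels, using generic scikit-learn components rather than the actual SIIPERCOM networks:

```python
# Editor's sketch of the two-stage idea (synthetic data, not SIIPERCOM):
# cluster customers by consumption profile, then train a small neural
# classifier per cluster to flag suspected irregular customers.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(2)
X = rng.normal(size=(600, 12))                  # e.g. 12 months of consumption features
y = (X[:, :3].mean(axis=1) < -0.5).astype(int)  # hypothetical "irregular" label

groups = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)

models = {}
for g in np.unique(groups):
    idx = groups == g
    if len(np.unique(y[idx])) < 2:              # skip degenerate groups in this toy setup
        continue
    clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500, random_state=0)
    clf.fit(X[idx], y[idx])
    models[g] = clf
    print(f"group {g}: {idx.sum()} customers, train accuracy {clf.score(X[idx], y[idx]):.2f}")
```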
524

[en] MATRIX FACTORIZATION MODELS FOR VIDEO RECOMMENDATION / [pt] MODELOS DE FATORAÇÃO MATRICIAL PARA RECOMENDAÇÃO DE VÍDEOS

BRUNO DE FIGUEIREDO MELO E SOUZA 14 March 2012 (has links)
[pt] A recomendação de itens a partir do feedback implícito dos usuários consiste em identificar padrões no interesse dos usuários por estes itens a partir de ações dos usuários, tais como cliques, interações ou o consumo de conteúdos específicos. Isso, de forma a prover sugestões personalizadas que se adéquem ao gosto destes usuários. Nesta dissertação, avaliamos a performance de alguns modelos de fatoração matricial otimizados para a tarefa de recomendação a partir de dados implícitos no consumo das ofertas de vídeos da Globo.com. Propusemos tratar estes dados de consumo como indicativos de intenção de um usuário em assistir um vídeo. Além disso, avaliamos como os vieses únicos dos usuários e vídeos, e sua variação temporal impactam o resultado das recomendações. Também sugerimos a utilização de um modelo de fatoração incremental otimizado para este problema, que escala linearmente com o tamanho da entrada, isto é, com os dados de visualizações e quantidade de variáveis latentes. Na tarefa de prever a intenção dos usuários em consumir um conteúdo novo, nosso melhor modelo de fatoração apresenta um RMSE de 0,0524 usando o viés de usuários e vídeos, assim como sua variação temporal. / [en] Item recommendation from implicit feedback datasets consists of passively tracking different sorts of user behavior, such as purchase history, watching habits and browsing activity, in order to improve customer experience by providing personalized recommendations that fit users' tastes. In this work we evaluate the performance of different matrix factorization models tailored for the recommendation task on the implicit feedback dataset extracted from the access logs of Globo.com's video site. We propose treating the data as an indication of a positive preference of a user for the video watched. We also evaluate the impact on recommendations of effects associated with either users or items, known as biases or intercepts, which are independent of any interaction, as well as their variation over the life span of the data. In addition, we suggest an incremental factorization procedure that scales linearly with the input size, i.e. with the view data and the number of latent factors. In predicting users' intention to consume new videos, our best factorization model achieves an RMSE of 0.0524 using user and video biases as well as their temporal dynamics.
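The bias-aware prediction rule discussed in this abstract is commonly written as r̂(u,i) = μ + b_u + b_i + p_u·q_i. The following editor's sketch trains that rule by plain SGD on random synthetic interactions; it does not reproduce the dissertation's incremental, time-aware model or the Globo.com data:

```python
# Editor's sketch: bias-aware matrix factorization, r_hat = mu + b_u + b_i + p_u.q_i,
# trained by plain SGD on random synthetic interactions (no temporal dynamics,
# no incremental updates, no real data).
import numpy as np

rng = np.random.default_rng(3)
n_users, n_items, k = 50, 40, 8
obs = [(rng.integers(n_users), rng.integers(n_items), rng.random()) for _ in range(2000)]

mu = np.mean([r for _, _, r in obs])
b_u, b_i = np.zeros(n_users), np.zeros(n_items)
P = 0.1 * rng.normal(size=(n_users, k))
Q = 0.1 * rng.normal(size=(n_items, k))

lr, reg = 0.01, 0.05
for epoch in range(20):
    for u, i, r in obs:
        e = r - (mu + b_u[u] + b_i[i] + P[u] @ Q[i])
        b_u[u] += lr * (e - reg * b_u[u])
        b_i[i] += lr * (e - reg * b_i[i])
        P[u], Q[i] = P[u] + lr * (e * Q[i] - reg * P[u]), Q[i] + lr * (e * P[u] - reg * Q[i])

rmse = np.sqrt(np.mean([(r - (mu + b_u[u] + b_i[i] + P[u] @ Q[i])) ** 2 for u, i, r in obs]))
print(f"training RMSE on synthetic data: {rmse:.4f}")
```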
525

Filtrage, segmentation et suivi d'images échographiques : applications cliniques / Filtering, segmentation and tracking of ultrasound images: clinical applications

Dahdouh, Sonia 23 September 2011 (has links)
La réalisation des néphrolithotomies percutanées est essentiellement conditionnée par la qualité de la ponction calicièle préalable. En effet, en cas d’échec de celle-ci, l’intervention ne peut avoir lieu. Réalisée le plus souvent sous échographie, sa qualité est fortement conditionnée par celle du retour échographique, considéré comme essentiel par la deuxième consultation internationale sur la lithiase pour limiter les saignements consécutifs à l’intervention. L’imagerie échographique est largement plébiscitée en raison de son faible coût, de l’innocuité de l’examen, liée à son caractère non invasif, de sa portabilité ainsi que de son excellente résolution temporelle ; elle possède toutefois une très faible résolution spatiale et souffre de nombreux artefacts tels que la mauvaise résolution des images, un fort bruit apparent et une forte dépendance à l’opérateur. L’objectif de cette thèse est de concevoir une méthode de filtrage des données échographiques ainsi qu’une méthode de segmentation et de suivi du rein sur des séquences ultrasonores, dans le but d’améliorer les conditions d’exécution d’interventions chirurgicales telles que les néphrolithotomies percutanées. Le filtrage des données, soumis et publié dans SPIE 2010, est réalisé en exploitant le mode de formation des images : le signal radiofréquence est filtré directement, avant même la formation de l’image 2D finale. Pour ce faire, nous utilisons une méthode basée sur les ondelettes, en seuillant directement les coefficients d’ondelettes aux différentes échelles à partir d’un algorithme de type split and merge appliqué avant reconstruction de l’image 2D. La méthode de suivi développée (une étude préliminaire a été publiée dans SPIE 2009) exploite un premier contour fourni par le praticien pour déterminer, en utilisant des informations purement locales, la position du contour sur l’image suivante de la séquence. L’image est transformée pour ne plus être qu’un ensemble de vignettes caractérisées par leurs critères de texture et une première segmentation basée région est effectuée sur cette image des vignettes. Cette première étape effectuée, le contour de l’image précédente de la séquence est utilisé comme initialisation afin de recalculer le contour de l’image courante sur l’image des vignettes segmentée. L’utilisation d’informations locales nous a permis de développer une méthode facilement parallélisable, ce qui permettra de travailler dans une optique temps réel. La validation de la méthode de filtrage a été réalisée sur des signaux radiofréquence simulés. La méthode a été comparée à différents algorithmes de l’état de l’art en terme de ratio signal sur bruit et de calcul de USDSAI. Les résultats ont montré la qualité de la méthode proposée comparativement aux autres. La méthode de segmentation, quant à elle, a été validée sans filtrage préalable, sur des séquences 2D réelles pour un temps d’exécution sans optimisation, inférieur à la minute pour des images 512*512. / The achievement of percutaneous nephrolithotomies is mainly conditioned by the quality of the initial puncture. Indeed, if it is not well performed, the intervention cannot take place. In order to make it more accurate, this puncture is often performed under ultrasound guidance. The quality of the ultrasound feedback is therefore critical and, when clear enough, it greatly helps limit bleeding. Thanks to its low cost, its non-invasive nature and its excellent temporal resolution, ultrasound imaging is considered very appropriate for this purpose.
However, this solution is not perfect: it has a low spatial resolution, and the images present artifacts due to poor image resolution (compared with images provided by some other medical devices) and speckle noise. Finally, this technique is highly operator dependent. The aims of the work presented here are, first, to design a filtering method for ultrasound data and, second, to develop a segmentation and tracking algorithm for kidney ultrasound sequences, in order to improve the operating conditions of surgical interventions such as percutaneous nephrolithotomies. The data filtering results were submitted and published in SPIE 2010. The method uses the way ultrasound images are formed to filter them: the radiofrequency signal is filtered directly, before the two-dimensional reconstruction. To do so, a wavelet-based method has been developed that directly thresholds the wavelet coefficients at different scales, based on a “split and merge”-like algorithm. The proposed algorithm was validated on simulated signals and its results were compared with those obtained with different state-of-the-art algorithms. Experiments show that the proposed approach performs better. The segmentation and tracking method (of which a preliminary study was published in SPIE 2009) uses a first contour given by a human expert and then determines, using only local information, the position of the next contour on the following image of the sequence. The tracking technique was validated on real data without prior filtering and compared successfully with state-of-the-art methods.
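As a rough illustration of filtering the radiofrequency signal in the wavelet domain before image formation, the editor's sketch below applies a simple universal soft threshold to a simulated RF line; it stands in for, but does not reproduce, the scale-dependent split-and-merge thresholding of the thesis, and it assumes the PyWavelets package is available:

```python
# Editor's sketch: soft-thresholding of wavelet coefficients of a simulated RF
# line before any image formation (a stand-in for the thesis's scale-dependent
# split-and-merge thresholding). Assumes the PyWavelets package.
import numpy as np
import pywt

rng = np.random.default_rng(4)
n = 2048
t = np.arange(n)
rf_clean = np.sin(2 * np.pi * t / 64) * np.exp(-((t - n / 2) / 300) ** 2)  # toy RF echo
rf_noisy = rf_clean + 0.3 * rng.normal(size=n)

coeffs = pywt.wavedec(rf_noisy, "db4", level=5)
sigma = np.median(np.abs(coeffs[-1])) / 0.6745          # noise level from finest scale
thr = sigma * np.sqrt(2 * np.log(n))                    # universal threshold
coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
rf_denoised = pywt.waverec(coeffs, "db4")[:n]

print("residual RMS before:", np.std(rf_noisy - rf_clean).round(3),
      "after:", np.std(rf_denoised - rf_clean).round(3))
```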
526

Photorealistic Surface Rendering with Microfacet Theory / Rendu photoréaliste de surfaces avec la théorie des microfacettes

Dupuy, Jonathan 26 November 2015 (has links)
La synthèse d'images dites photoréalistes nécessite d'évaluer numériquement la manière dont la lumière et la matière interagissent physiquement, ce qui, malgré la puissance de calcul impressionnante dont nous bénéficions aujourd'hui et qui ne cesse d'augmenter, est encore bien loin de devenir une tâche triviale pour nos ordinateurs. Ceci est dû en majeure partie à la manière dont nous représentons les objets: afin de reproduire les interactions subtiles qui mènent à la perception du détail, il est nécessaire de modéliser des quantités phénoménales de géométries. Au moment du rendu, cette complexité conduit inexorablement à de lourdes requêtes d'entrées-sorties, qui, couplées à des évaluations d'opérateurs de filtrage complexes, rendent les temps de calcul nécessaires à produire des images sans défaut totalement déraisonnables. Afin de pallier ces limitations sous les contraintes actuelles, il est nécessaire de dériver une représentation multiéchelle de la matière. Dans cette thèse, nous construisons une telle représentation pour la matière dont l'interface correspond à une surface perturbée, une configuration qui se construit généralement via des cartes d'élévations en infographie. Nous dérivons notre représentation dans le contexte de la théorie des microfacettes (conçue à l'origine pour modéliser la réflectance de surfaces rugueuses), que nous présentons d'abord, puis augmentons en deux temps. Dans un premier temps, nous rendons la théorie applicable à travers plusieurs échelles d'observation en la généralisant aux statistiques de microfacettes décentrées. Dans l'autre, nous dérivons une procédure d'inversion capable de reconstruire les statistiques de microfacettes à partir de réponses de réflexion d'un matériau arbitraire dans les configurations de rétroréflexion. Nous montrons comment cette théorie augmentée peut être exploitée afin de dériver un opérateur général et efficace de rééchantillonnage approximatif de cartes d'élévations qui (a) préserve l'anisotropie du transport de la lumière pour n'importe quelle résolution, (b) peut être appliqué en amont du rendu et stocké dans des MIP maps afin de diminuer drastiquement le nombre de requêtes d'entrées-sorties, et (c) simplifie de manière considérable les opérations de filtrage par pixel, le tout conduisant à des temps de rendu plus courts. Afin de valider et démontrer l'efficacité de notre opérateur, nous synthétisons des images photoréalistes anticrénelées et les comparons à des images de référence. De plus, nous fournissons une implantation C++ complète tout au long de la dissertation afin de faciliter la reproduction des résultats obtenus. Nous concluons avec une discussion portant sur les limitations de notre approche, ainsi que sur les verrous restant à lever afin de dériver une représentation multiéchelle de la matière encore plus générale. / Photorealistic rendering involves the numerical resolution of physically accurate light/matter interactions which, despite the tremendous and continuously increasing computational power that we now have at our disposal, is nowhere near becoming a quick and simple task for our computers. This is mainly due to the way that we represent objects: in order to reproduce the subtle interactions that create detail, tremendous amounts of geometry need to be queried. Hence, at render time, this complexity leads to heavy input/output operations which, combined with numerically complex filtering operators, require unreasonable amounts of computation time to guarantee artifact-free images.
In order to alleviate such issues with today's constraints, a multiscale representation for matter must be derived. In this thesis, we derive such a representation for matter whose interface can be modelled as a displaced surface, a configuration that is typically simulated with displacement texture mapping in computer graphics. Our representation is derived within the realm of microfacet theory (a framework originally designed to model reflection of rough surfaces), which we review and augment in two respects. First, we render the theory applicable across multiple scales by extending it to support noncentral microfacet statistics. Second, we derive an inversion procedure that retrieves microfacet statistics from backscattering reflection evaluations. We show how this augmented framework may be applied to derive a general and efficient (although approximate) down-sampling operator for displacement texture maps that (a) preserves the anisotropy exhibited by light transport for any resolution, (b) can be applied prior to rendering and stored in MIP texture maps to drastically reduce the number of input/output operations, and (c) considerably simplifies per-pixel filtering operations, resulting overall in shorter rendering times. In order to validate and demonstrate the effectiveness of our operator, we render antialiased photorealistic images and compare them against ground truth. In addition, we provide C++ implementations throughout the dissertation to facilitate the reproduction of the presented results. We conclude with a discussion of the limitations of our approach and of avenues toward a more general multiscale representation for matter.
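The key idea behind such a down-sampling operator — filter statistics of the micro-scale slopes rather than the heights themselves, so that averaged MIP levels keep their anisotropy — can be illustrated with a LEAN-mapping-style moment pyramid. The editor's sketch below uses a random heightfield and is not the thesis's operator:

```python
# Editor's sketch (random heightfield, not the thesis's operator): store first and
# second moments of the surface slopes per texel, average them when building MIP
# levels, and recover an anisotropic slope covariance at any resolution.
import numpy as np

rng = np.random.default_rng(5)
H = rng.normal(scale=0.05, size=(128, 128))            # toy displacement map

# Per-texel slopes by finite differences (unit texel spacing assumed)
sx = np.diff(H, axis=1, append=H[:, -1:])
sy = np.diff(H, axis=0, append=H[-1:, :])

# Moments stored per texel: E[x], E[y], E[x^2], E[y^2], E[xy]
moments = np.stack([sx, sy, sx * sx, sy * sy, sx * sy], axis=-1)

def downsample(m):
    """Average 2x2 blocks of moments to build the next MIP level."""
    return 0.25 * (m[0::2, 0::2] + m[1::2, 0::2] + m[0::2, 1::2] + m[1::2, 1::2])

level = moments
for _ in range(5):                                     # build a few MIP levels
    level = downsample(level)

ex, ey, exx, eyy, exy = np.moveaxis(level, -1, 0)
cov_xx, cov_yy, cov_xy = exx - ex ** 2, eyy - ey ** 2, exy - ex * ey
print("mean slope covariance at the coarse level:",
      cov_xx.mean().round(5), cov_yy.mean().round(5), cov_xy.mean().round(5))
```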
527

Processos de gentrificação / Gentrification processes

Viana, Guilherme David dos Santos 07 June 2017 (has links)
A presente dissertação traz reflexões sobre os processos de gentrificação combinando argumentações teóricas baseadas em modelos teóricos que preveem análises fundamentadas em oferta e demanda, e através da renda diferencial que possibilita a análise de um potencial de renda, unindo-se a teorias de deslocamento, teoria de filtragem e teoria do ciclo de vida familiar, assim como atrair para discussão teorias sobre centro e subcentros. Com esses conceitos sugere-se uma análise que transpasse o processo de gentrificação, observando qual a consequência do processo efetivado e com quais fenômenos ele pode vir a contribuir, assim como também, se observa quais fenômenos podem contribuir para que o processo de gentrificação ocorra. Após essas conceituações, apresentar-se-ão exemplos de processos de gentrificação, apresentados sob a perspectiva de seus pesquisadores, contribuindo para uma compreensão mais abrangente sobre as causas e efeitos abordados no processo de gentrificação, através da percepção de suas características expostas em diversos casos que possuem espaços constituídos de forma única. Para assim conseguir-se uma base substancial na procura por indícios de processos da primeira e da segunda onda do processo de gentrificação no município de São Paulo. / This dissertation offers reflections on gentrification processes, combining theoretical arguments based on models that frame the analysis in terms of supply and demand and on the rent gap, which allows the analysis of a rent potential, together with displacement theory, filtering theory and family life-cycle theory, and bringing theories about centres and subcentres into the discussion. With these concepts, an analysis is proposed that goes beyond the gentrification process itself, observing the consequences of the completed process and the phenomena to which it may contribute, as well as the phenomena that may contribute to gentrification taking place. After these conceptualizations, examples of gentrification processes are presented from the perspective of their researchers, contributing to a more comprehensive understanding of the causes and effects involved in gentrification, through the perception of their characteristics in several cases with uniquely constituted spaces. This provides a substantial basis for the search for evidence of first- and second-wave gentrification processes in the city of São Paulo.
528

Reconstrução tomográfica de imagens com ruído Poisson: estimativa das projeções. / Tomographic reconstruction of images with Poisson noise: projection estimation.

Furuie, Sérgio Shiguemi 06 July 1990 (has links)
A reconstrução tomográfica de imagens com ruído Poisson tem grandes aplicações em medicina nuclear. A demanda por informações mais complexas, como por exemplo, várias secções de um órgão, e a necessidade de reduzir a dosagem radioativa a que o paciente é submetido, requerem métodos adequados para a reconstrução de imagem com baixa contagem, no caso, baixa relação sinal/ruído. A abordagem estatística, utilizando a máxima verossimilhança (ML) e o algoritmo Expectation-Maximization (EM), produz melhores resultados do que os métodos tradicionais, pois incorpora a natureza estatística do ruído no seu modelo. A presente tese apresenta uma solução alternativa, considerando também o modelo de ruído Poisson, que produz resultados comparáveis ao do ML-EM, porém com custo computacional bem menor. A metodologia proposta consiste, basicamente, em se estimar as projeções considerando o modelo de formação das projeções ruidosas, antes do processo da reconstrução. São discutidos vários estimadores otimizados, inclusive Bayesianos. Em especial, é mostrado que a transformação de ruído Poisson em ruído aditivo Gaussiano e independente do sinal (transformação de Anscombe), conjugada à estimativa, produz bons resultados. Se as projeções puderem ser consideradas, aproximadamente, transformadas de Radon da imagem a ser reconstruída, então pode ser aplicado um dos métodos da transformada para a reconstrução tomográfica. Dentre estes métodos, o da aplicação direta da transformada de Fourier foi avaliado mais detalhadamente devido ao seu grande potencial para reconstruções rápidas com processamento vetorial e processamento paralelo. A avaliação do método proposto foi realizada através de simulações, onde foram geradas as imagens originais e as projeções com ruído Poisson. Os resultados foram comparados com métodos clássicos como a filtragem-retroprojeção, o ART e o ML-EM. Em particular, a transformação de Anscombe conjugada ao estimador heurístico (filtro de Maeda), mostrou resultados próximos aos do ML-EM, porém com tempo de processamento bem menor. Os resultados obtidos mostram a viabilidade da presente proposta vir a ser utilizada em aplicações clínicas na medicina nuclear. / Tomographic reconstruction of images with Poisson noise is an important problem in nuclear medicine. The need for more complete information, such as the reconstruction of several sections of an organ, and the necessity of reducing the radioactive dose absorbed by the patient, call for better methods to reconstruct images with low counts and a low signal-to-noise ratio. Statistical approaches using Maximum Likelihood (ML) and the Expectation-Maximization (EM) algorithm lead to better results than classical methods, since ML-EM considers the stochastic nature of the noise in its model. This thesis presents an alternative solution, also using a Poisson noise model, that produces results similar to those of ML-EM but with much lower computational cost. The proposed technique basically consists of projection estimation before reconstruction, taking into account a model for the formation of the noisy projections. Several optimal and Bayesian estimators are analysed. It is shown that the transformation of Poisson noise into additive, signal-independent Gaussian noise (Anscombe transformation), followed by estimation, yields good results. If the projections can be assumed to be approximately the Radon transform of the image to be reconstructed, then it is possible to reconstruct using one of the transform methods.
Among these methods, the Direct Fourier Method was analysed in detail, due to its applicability to fast reconstruction using array processors and parallel processing. Computer simulations were used in order to assess the proposed technique. Phantoms and phantom projections with Poisson noise were generated. The results were compared with traditional methods such as Filtered Backprojection, the Algebraic Reconstruction Technique (ART) and ML-EM. Specifically, the Anscombe transformation together with a heuristic estimator (Maeda's filter) produced results comparable to ML-EM, while spending only a fraction of the processing time.
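The projection-estimation step built around the Anscombe transformation can be sketched as follows. This is an editor's illustration on a toy one-dimensional projection: the transform 2√(x + 3/8) makes Poisson counts approximately Gaussian with unit variance, a simple Gaussian smoother plays the role of the estimator, and the algebraic inverse is used (the thesis's Maeda-filter estimator and Direct Fourier reconstruction are not reproduced):

```python
# Editor's sketch: Anscombe transform 2*sqrt(x + 3/8) to Gaussianize Poisson
# counts, a Gaussian smoother as a generic estimator, and the algebraic inverse.
# The Maeda-filter estimator and the Direct Fourier reconstruction of the thesis
# are not reproduced here.
import numpy as np
from scipy.ndimage import gaussian_filter1d

rng = np.random.default_rng(6)
lam = 5.0 * (1 + np.sin(np.linspace(0, 3 * np.pi, 256)) ** 2)   # toy projection profile
counts = rng.poisson(lam)                                       # low-count Poisson data

z = 2.0 * np.sqrt(counts + 3.0 / 8.0)       # variance-stabilizing (Anscombe) transform
z_hat = gaussian_filter1d(z, sigma=2.0)     # any Gaussian-domain estimator fits here
proj_hat = (z_hat / 2.0) ** 2 - 3.0 / 8.0   # simple inverse (biased at very low counts)

print("RMSE of raw counts  :", np.sqrt(np.mean((counts - lam) ** 2)).round(3))
print("RMSE after filtering:", np.sqrt(np.mean((proj_hat - lam) ** 2)).round(3))
```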
529

Um estudo sobre filtros IIR adaptativos com aplicação a uma estrutura polifásica. / A study about adaptive IIR filters with application to a polyphase structure.

Burt, Phillip Mark Seymour 11 April 1997 (has links)
Neste trabalho faz-se um estudo sobre filtros IIR adaptativos e é apresentada uma estrutura polifásica para filtragem IIR adaptativa, que, em troca de um aumento de complexidade computacional, pode apresentar características mais favoráveis do que a estrutura direta comumente usada. O aumento da complexidade computacional, relativamente a um algoritmo do tipo Newton, por exemplo, é pequeno. Apresenta-se uma análise dos efeitos da proximidade ao círculo unitário dos pólos do sistema sendo modelado. Um dos efeitos considerados é o comportamento limite do condicionamento da matriz de estados associada ao algoritmo de adaptação. São considerados algoritmos de adaptação de passo constante de uso comum para filtros IIR adaptativos. O método utilizado é particularmente útil para a verificação do efeito da posição dos pólos do sistema sendo modelado e também para a introdução de certas restrições ao mesmo, como, por exemplo, norma L2 fixa e resposta em freqüência passa-tudo. Um resultado interessante é que a única situação, entre as testadas, em que o condicionamento da matriz mencionada não tende a infinito quando um número qualquer de polos do sistema sendo modelado H(z) se aproxima da circunferência unitária, é quando H(z) é passa-tudo e emprega-se o algoritmo PLR. São analisadas também a superfície de erro e a superfície de erro reduzida para filtros IIR adaptativos. Mostra-se que, quando o sistema sendo modelado possui polos próximos à circunferência unitária, a superfície de erro reduzida apresenta regiões planas com erro quadrático médio elevado. A existência destas regiões resulta em uma baixa velocidade de convergência global de algoritmos de passo constante. A partir da decomposição em valores singulares (SVD) da forma de Hankel do sistema sendo modelado, é apresentada também uma decomposição da superfície de erro reduzida, a partir da qual pode-se obter uma separação parcial dos efeitos do sistema sendo modelado e da forma de realização do filtro adaptativo. Uma estrutura polifásica para filtragem IIR adaptativa é apresentada e seu desempenho é comparado com o de filtros IIR adaptativos na forma direta. Mostra-se o possível ganho da estrutura polifásica quanto à velocidade de convergência local e quanto às características da superfície de erro reduzida e à velocidade de convergência global. Demonstra-se, para a estrutura polifásica, que, com entrada branca e modelamento suficiente, todos os pontos estacionários da superfície de erro são mínimos globais da mesma. Este resultado não decorre diretamente de propriedades análogas relativas à estrutura direta, já conhecidas. Tanto para a estrutura direta quanto para a estrutura polifásica, são apresentados os resultados de várias simulações dos algoritmos de adaptação considerados. / A study on IIR adaptive filters and a polyphase structure for IIR adaptive filtering are presented. In exchange for an increase in computational complexity, which is small compared to Newton-type algorithms, the polyphase structure may exhibit better performance than the commonly used direct structure. An analysis of the effects of the proximity to the unit circle of the modelled system's poles is presented. One of the points considered is the limiting behavior of the conditioning of the state matrix associated with the adaptation algorithm. Commonly used constant-gain algorithms for adaptive IIR filters are considered.
The method of analysis is especially useful for verifying the effects of the position of the system's poles and also for introducing certain restrictions on the system, such as a fixed L2 norm and an all-pass frequency response. An interesting result is that, among the situations that were tested, the only one in which the conditioning of the aforementioned matrix does not tend to infinity as the poles of the modelled system H(z) tend to the unit circle is when H(z) is all-pass and the PLR algorithm is employed. The error surface and the reduced error surface for IIR adaptive filters are also analyzed. It is shown that when the modelled system has poles close to the unit circle, the reduced error surface presents flat regions with high mean square error. The presence of these flat regions results in low global convergence speed for constant-gain adaptive algorithms. Based on the singular value decomposition (SVD) of the modelled system's Hankel form, a decomposition of the reduced error surface is also presented, which gives a partial separation of the effects of the system and of the adaptive filter's structure. A polyphase structure for IIR adaptive filtering is presented and its performance is compared to that of the direct structure. The possible gains in local and global convergence speed, as well as the better behavior of the reduced error surface that may be attained, are shown. It is demonstrated, for the polyphase structure, that, with white input and sufficient modelling, all the stationary points of the error surface are global minima. This result does not follow directly from similar well-known results for the direct structure. Simulation results for the considered algorithms are also presented.
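The constant-gain PLR (pseudo-linear regression) algorithm mentioned above can be sketched for the direct form and a first-order example as follows. This is an editor's illustration with an invented toy system; the polyphase structure studied in the thesis is not shown:

```python
# Editor's sketch: constant-gain PLR (pseudo-linear regression) adaptation of a
# first-order direct-form adaptive IIR filter identifying a toy system
# H(z) = 0.5 / (1 - 0.8 z^-1); the polyphase structure is not shown.
import numpy as np

rng = np.random.default_rng(7)
N = 20000
u = rng.normal(size=N)                     # white input

# Desired signal from the unknown system: d(n) = 0.8 d(n-1) + 0.5 u(n)
d = np.zeros(N)
for n in range(1, N):
    d[n] = 0.8 * d[n - 1] + 0.5 * u[n]

a_hat, b_hat, mu = 0.0, 0.0, 0.005         # adaptive coefficients and step size
y_prev = 0.0
for n in range(N):
    y = a_hat * y_prev + b_hat * u[n]      # adaptive filter output
    e = d[n] - y                           # output error
    a_hat += mu * e * y_prev               # PLR regressor uses the filter's own past output
    b_hat += mu * e * u[n]
    a_hat = float(np.clip(a_hat, -0.99, 0.99))   # keep the pole inside the unit circle
    y_prev = y

print(f"estimated pole a = {a_hat:.3f} (true 0.8), gain b = {b_hat:.3f} (true 0.5)")
```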
530

Filtros de Kalman robustos para sistemas dinâmicos singulares em tempo discreto / Robust Kalman filters for discrete-time singular systems

Bianco, Aline Fernanda 29 June 2009 (has links)
Esta tese trata do problema de estimativa robusta ótima para sistemas dinâmicos singulares discretos no tempo. Novos algoritmos recursivos são formulados para as estimativas filtradas e preditoras com as correspondentes equações de Riccati. O filtro robusto tipo Kalman e a equação de Riccati correspondente são obtidos numa formulação mais geral, estendendo os resultados apresentados na literatura. O funcional quadrático proposto para deduzir este filtro faz a combinação das técnicas mínimos quadrados regularizados e funções penalidade. O sistema considerado para obtenção de tais estimativas é singular, discreto, variante no tempo, com ruídos correlacionados e todos os parâmetros do modelo linear estão sujeitos a incertezas. As incertezas paramétricas são limitadas por norma. As propriedades de estabilidade e convergência do filtro de Kalman para sistemas nominais e incertos são provadas, mostrando-se que o filtro em estado permanente é estável e a recursão de Riccati associada a ele é uma sequência monótona não decrescente, limitada superiormente pela solução da equação algébrica de Riccati. / This thesis considers the optimal robust estimation problem for discrete-time singular dynamic systems. New recursive algorithms are developed for the Kalman filtered and predicted estimates, with the corresponding Riccati equations. The singular robust Kalman-type filter and the corresponding recursive Riccati equation are obtained in their most general formulation, extending the results presented in the literature. The quadratic functional developed to deduce this filter combines regularized least-squares and penalty function approaches. The system considered to obtain the estimates is singular, time-varying, with correlated noises, and all parameter matrices of the underlying linear model are subject to uncertainties. The parametric uncertainty is assumed to be norm-bounded. The stability and convergence properties of the Kalman filter for nominal and uncertain system models are proved: we show that the steady-state filter is stable and that the associated Riccati recursion is a nondecreasing monotone sequence bounded above by the solution of the algebraic Riccati equation.
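For background on the kind of recursion whose robust, singular-system counterpart the thesis derives, the editor's sketch below iterates the nominal discrete-time predictor Riccati recursion for a small invented state-space model; its trace settles toward the steady-state solution of the algebraic Riccati equation (the thesis's robust recursion with norm-bounded uncertainty is not reproduced):

```python
# Editor's sketch: nominal discrete-time predictor Riccati recursion for a small
# invented state-space model; the robust recursion for singular systems with
# norm-bounded uncertainty derived in the thesis is not reproduced.
import numpy as np

A = np.array([[0.95, 0.10], [0.00, 0.90]])
C = np.array([[1.0, 0.0]])
Q = 0.01 * np.eye(2)          # process noise covariance
R = np.array([[0.1]])         # measurement noise covariance

P = np.eye(2)                 # initial prediction error covariance
for k in range(200):
    S = C @ P @ C.T + R
    K = A @ P @ C.T @ np.linalg.inv(S)       # predictor gain
    P = A @ P @ A.T - K @ S @ K.T + Q        # Riccati recursion
    if k in (0, 9, 49, 199):
        print(f"k = {k:3d}   trace(P) = {np.trace(P):.6f}")
```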
