1 |
Model Calibration, Drainage Volume Calculation and Optimization in Heterogeneous Fractured Reservoirs
Kang, Suk Sang 1975- 14 March 2013 (has links)
We propose a rigorous approach for well drainage volume calculations in gas reservoirs based on the flux field derived from dual-porosity finite-difference simulation and demonstrate its application to optimize well placement. Our approach relies on a high-frequency asymptotic solution of the diffusivity equation and emulates the propagation of a 'pressure front' in the reservoir along gas streamlines. The proposed approach is a generalization of the radius of drainage concept in well test analysis (Lee 1982), which allows us not only to rigorously compute well drainage volumes as a function of time but also to examine the potential impact of infill wells on the drainage volumes of existing producers. Using these results, we present a systematic approach to optimize well placement to maximize the Estimated Ultimate Recovery (EUR).
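The radius-of-drainage idea that the abstract generalizes can be illustrated with the classical radius-of-investigation estimate from well test analysis. The sketch below is a minimal illustration, not the streamline-based dual-porosity calculation developed in the thesis, and the reservoir properties are hypothetical placeholders.

```python
import numpy as np

def radius_of_investigation(k_md, phi, mu_cp, ct_per_psi, t_hours):
    """Classical radius-of-investigation estimate in field units (after Lee 1982):
    r_inv [ft] = sqrt(k*t / (948 * phi * mu * c_t))."""
    return np.sqrt(k_md * t_hours / (948.0 * phi * mu_cp * ct_per_psi))

# Hypothetical tight-gas reservoir properties (placeholders)
k, phi, mu, ct = 0.05, 0.08, 0.02, 3e-4   # md, fraction, cp, 1/psi
t = np.array([24.0, 240.0, 2400.0, 24000.0])  # hours
for ti, ri in zip(t, radius_of_investigation(k, phi, mu, ct, t)):
    print(f"t = {ti:7.0f} h  ->  r_inv ~ {ri:8.1f} ft")
```

In the thesis the same notion is carried along gas streamlines so that the drained volume honors reservoir heterogeneity instead of assuming radial flow around the well.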
A history matching algorithm is proposed that sequentially calibrates reservoir parameters from the global to the local scale, considering parameter uncertainty and the resolution of the data. Parameter updates are constrained to the prior geologic heterogeneity and performed parsimoniously to the smallest spatial scales at which they can be resolved by the available data. In the first step of the workflow, a Genetic Algorithm is used to assess the uncertainty in global parameters that influence field-scale flow behavior, specifically reservoir energy. To identify the reservoir volume over which each regional multiplier is applied, we have developed a novel approach to heterogeneity segmentation based on spectral clustering theory. The proposed clustering captures the main features of the prior model using the second eigenvector of the graph affinity matrix.
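As a rough illustration of how a second eigenvector can segment heterogeneity, the sketch below bipartitions a toy cell-affinity graph using the Fiedler vector of its normalized Laplacian. This is a generic spectral-clustering recipe on an assumed affinity matrix, not the specific segmentation procedure developed in the thesis.

```python
import numpy as np

def fiedler_bipartition(W):
    """Split a graph with symmetric affinity matrix W into two regions using the
    sign of the eigenvector for the second-smallest eigenvalue of the
    normalized Laplacian (the Fiedler vector)."""
    d = W.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(d, 1e-12)))
    L_sym = np.eye(len(W)) - D_inv_sqrt @ W @ D_inv_sqrt
    vals, vecs = np.linalg.eigh(L_sym)      # eigenvalues in ascending order
    fiedler = vecs[:, 1]
    return (fiedler >= 0).astype(int)       # one region label per cell

# Toy affinity matrix: two groups of cells joined by a weak link
W = np.array([[0, 1, 1, 0.05, 0],
              [1, 0, 1, 0,    0],
              [1, 1, 0, 0,    0],
              [0.05, 0, 0, 0, 1],
              [0, 0, 0, 1,    0]], dtype=float)
print(fiedler_bipartition(W))   # e.g. [0 0 0 1 1]: cells {0,1,2} vs. {3,4}
```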
In the second stage of the workflow, we parameterize the high-resolution heterogeneity in the spectral domain using the Grid Connectivity-based Transform (GCT) to drastically reduce the dimension of the calibration parameter set. The GCT implicitly imposes geological continuity and promotes minimal changes to each prior model in the ensemble during the calibration process. The field-scale utility of the workflow is then demonstrated with the calibration of a model characterizing a structurally complex and highly fractured reservoir.
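The compression idea behind a grid-connectivity-based parameterization can be sketched in one dimension: represent the (log-)property field with a handful of smooth basis vectors derived from cell connectivity, so calibration updates a few coefficients instead of every cell. The basis construction and the synthetic field below are illustrative assumptions, not the thesis implementation.

```python
import numpy as np

def connectivity_basis(n, num_modes):
    """Leading eigenvectors of the Laplacian of a 1D grid-connectivity graph.
    (For a real reservoir grid the Laplacian is assembled from cell adjacency.)"""
    L = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
    L[0, 0] = L[-1, -1] = 1.0               # path-graph end cells have degree 1
    vals, vecs = np.linalg.eigh(L)
    return vecs[:, :num_modes]              # smooth, low-frequency modes first

n, m = 200, 10
x = np.linspace(0, 1, n)
perm = np.exp(np.sin(6 * x) + 0.1 * np.random.randn(n))   # synthetic permeability field
B = connectivity_basis(n, m)
coeffs = B.T @ np.log(perm)                 # calibration now acts on m << n coefficients
recon = np.exp(B @ coeffs)
print("compression:", n, "->", m,
      "| relative error:", round(np.linalg.norm(recon - perm) / np.linalg.norm(perm), 3))
```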
|
2 |
Clustering users based on the user’s photo library / Gruppering av användare baserat på användarens fotobibliotek
Bergholm, Marcus January 2018 (has links)
For any user-adaptive system, the most important task is to provide users with what they want and need without them asking for it explicitly. This process, often called personalisation, is done by tailoring the service or product to individual users or user groups. In this thesis, we explore the possibility of building a model that clusters users based on their photo libraries, with the aim of creating a better-personalised experience within a service called Degoo. The clustering model, Deep Embedding Clustering, was evaluated on several internal indices alongside an automated categorization model that indicated what types of images each cluster contained. The user clustering was later evaluated on split-tests running within the Degoo service. The results show that four out of five clusters had some general indication of a type, such as vacation photos, clothes, text, and people. The evaluation of the clustering's impact on the split-tests revealed patterns indicating optimal attribute values for certain user clusters.
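A deep embedded clustering model jointly learns an embedding and the cluster assignments; the evaluation side mentioned above can be sketched with standard internal indices. The snippet below scores clusterings of hypothetical per-user embedding vectors with scikit-learn, using k-means as a stand-in for the learned clustering; the data and dimensions are placeholders.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score, calinski_harabasz_score, davies_bouldin_score

# Hypothetical per-user embeddings, e.g. the mean of image-feature vectors in each photo library
rng = np.random.default_rng(0)
user_embeddings = rng.normal(size=(500, 64))

for k in range(2, 8):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(user_embeddings)
    print(f"k={k}: "
          f"silhouette={silhouette_score(user_embeddings, labels):.3f}  "
          f"CH={calinski_harabasz_score(user_embeddings, labels):.1f}  "
          f"DB={davies_bouldin_score(user_embeddings, labels):.3f}")
```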
|
3 |
Avaliação do uso de técnicas de agrupamento na busca e recuperação de imagens
Silva Filho, Antonio Fernandes da 26 August 2016 (has links)
Nowadays, almost all services and daily tasks involve some computational apparatus, leading to the creation and accumulation of data. This growing amount of data is an important opportunity for scientific and commercial exploration, and these sectors have started to value and use such information more intensively and objectively. In addition, the natural exposure of public and private life through social networks and electronic devices tends to generate a significant amount of images that can and should be used for various purposes, such as public security. In this context, facial recognition has advanced and attracted specific studies and applications aimed at identifying individuals through parametric features. However, some barriers remain that hinder the efficient execution of this operation, such as the computational cost of searching and retrieving images in very large databases. Based on this, this work proposes the use of clustering algorithms to organize image data, thereby directing and "shortening" searches over facial images. More specifically, we analyze the speed-up obtained by applying clustering techniques to the automated organization of images as a preparatory step for performing searches. The proposed method was applied to real facial image databases and used two clustering algorithms (k-means and EM) with two similarity measures (Euclidean distance and Pearson correlation). The results show that using clustering to organize the data is efficient, leading to a significant reduction in search time with no loss in the precision of the process.
computacional, acarretando a criação e o consequente acúmulo de dados. Essa progressiva
quantidade de dados representa uma importante oportunidade de exploração para os
ramos científico e comercial, que passaram a valorizar e utilizar essas informações de
forma mais intensa e objetiva. Aliado a isso, o processo natural de exposição da
vida pública e privada através das redes sociais e dos dispositivos eletrônicos tende
a gerar uma quantidade expressiva de imagens que podem e devem ser aproveitadas
com os mais diversos fins, como na área de segurança pública. Nesse contexto, o
reconhecimento facial tem avançado e atraído estudos e aplicações específicas, que visam a
identificação de indivíduos por meio de características paramétricas. No entanto, alguns
entraves ainda são encontrados, dificultando a realização eficiente dessa operação, como
o custo computacional relativo ao tempo de busca e recuperação de imagens em bases
de dados de grandes proporções. Baseado nisso, este trabalho propõe a utilização de
algoritmos de agrupamento na organização dos dados de imagens, proporcionando assim
um direcionamento e “encurtamento” nas buscas de imagens faciais. Mais especificamente,
é feita uma análise relacionada à otimização imposta pelo uso de técnicas de agrupamento
aplicadas na organização automatizada das imagens, como etapa preparativa para realização
de buscas. O método proposto foi aplicado em bases de dados de imagens faciais reais, e
utilizou dois algoritmos de agrupamento (k-means e EM) com variações para as medidas de
similaridade (distância euclidiana e correlação de Pearson). Os resultados obtidos revelam
que o emprego do agrupamento na organização dos dados mostrou-se eficiente, levando a
uma redução significativa no tempo de busca, e sem prejuízos na precisão do processo / 2017-04-19
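The "shortening" of the search described above amounts to clustering the gallery offline and then restricting each query to the nearest cluster. A minimal sketch of that idea follows, using k-means on synthetic face feature vectors; the dimensions, cluster count, and Euclidean matching are assumptions, and the thesis additionally evaluates EM and Pearson-correlation similarity.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
gallery = rng.normal(size=(10000, 128))          # hypothetical face feature vectors
query = rng.normal(size=128)

# Offline step: organize the gallery into k groups
k = 50
km = KMeans(n_clusters=k, n_init=5, random_state=1).fit(gallery)

# Online step: search only inside the cluster whose centroid is closest to the query
nearest_cluster = np.argmin(np.linalg.norm(km.cluster_centers_ - query, axis=1))
candidates = np.where(km.labels_ == nearest_cluster)[0]
dists = np.linalg.norm(gallery[candidates] - query, axis=1)
best = candidates[np.argmin(dists)]
print(f"searched {len(candidates)} of {len(gallery)} images; best match index = {best}")
```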
|
4 |
Generating Thematic Maps from Hyperspectral Imagery Using a Bag-of-Materials Model
Park, Kyoung Jin 25 July 2013 (has links)
No description available.
|
5 |
Structural priors for multiobject semi-automatic segmentation of three-dimensional medical images via clustering and graph cut algorithms
Kéchichian, Razmig 02 July 2013 (has links) (PDF)
We develop a generic Graph Cut-based semiautomatic multiobject image segmentation method principally for use in routine medical applications ranging from tasks involving few objects in 2D images to fairly complex near whole-body 3D image segmentation. The flexible formulation of the method allows its straightforward adaptation to a given application. In particular, the graph-based vicinity prior model we propose, defined as shortest-path pairwise constraints on the object adjacency graph, can be easily reformulated to account for the spatial relationships between objects in a given problem instance. The segmentation algorithm can be tailored to the runtime requirements of the application and the online storage capacities of the computing platform by an efficient and controllable Voronoi tessellation clustering of the input image which achieves a good balance between cluster compactness and boundary adherence criteria. Qualitative and quantitative comprehensive evaluation and comparison with the standard Potts model confirm that the vicinity prior model brings significant improvements in the correct segmentation of distinct objects of identical intensity, the accurate placement of object boundaries and the robustness of segmentation with respect to clustering resolution. Comparative evaluation of the clustering method with competing ones confirms its benefits in terms of runtime and quality of produced partitions. Importantly, compared to voxel segmentation, the clustering step improves both overall runtime and memory footprint of the segmentation process up to an order of magnitude virtually without compromising the segmentation quality.
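A vicinity prior of the kind described, i.e. pairwise label costs derived from shortest paths on the object adjacency graph, might be assembled as in the sketch below. The anatomical adjacency graph and the linear cost scaling are illustrative assumptions; the resulting matrix V would feed the pairwise term of the Graph Cut energy.

```python
import networkx as nx
import numpy as np

# Hypothetical object adjacency graph for a small anatomical labeling problem
G = nx.Graph()
G.add_edges_from([("liver", "right_kidney"), ("liver", "stomach"),
                  ("stomach", "spleen"), ("spleen", "left_kidney")])

labels = list(G.nodes)
dist = dict(nx.all_pairs_shortest_path_length(G))

# Pairwise label cost grows with graph distance: adjacent objects are cheap to
# place next to each other, remote ones are heavily penalized.
V = np.zeros((len(labels), len(labels)))
for i, a in enumerate(labels):
    for j, b in enumerate(labels):
        if a != b:
            V[i, j] = dist[a].get(b, len(labels))   # cap if ever disconnected

print(labels)
print(V)
```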
|
6 |
A comparative study on a practical use case for image clustering based on common shareability and metadata / En jämförande studie i ett praktiskt användningsfall för bildklustring baserat på gemensamt delade bilder och dess metadata
Dackander, Erik January 2018 (has links)
As the amount of data increases every year, the need for effective structuring of data is a growing problem. This thesis aims to investigate and compare how four different clustering algorithms perform on a practical use case for images. The four algorithms used are Affinity Propagation, BIRCH, Rectifying Self-Organizing Maps (RSOM), and Deep Embedded Clustering (DEC). The algorithms are given the image metadata as well as the image content, extracted using a pre-trained deep convolutional neural network. The results demonstrate that, while there are variations in the data, Affinity Propagation and BIRCH show the most potential among the four algorithms. Furthermore, when metadata is available it improves the results of the algorithms that can process the extreme values it may introduce. For Affinity Propagation the mean share score is improved by 5.6 percentage points and the silhouette score by 0.044; for BIRCH the mean share score improves by 1.9 percentage points and the silhouette score by 0.051. RSOM and DEC could not process the metadata.
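A stripped-down version of the comparison setup might look like the sketch below: concatenate CNN image embeddings with scaled metadata and cluster with Affinity Propagation and BIRCH, scoring each result with the silhouette index. The feature matrices are random stand-ins, and the share score used in the thesis is not reproduced here.

```python
import numpy as np
from sklearn.cluster import AffinityPropagation, Birch
from sklearn.metrics import silhouette_score
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(2)
cnn_features = rng.normal(size=(300, 512))   # stand-in for pre-trained CNN image embeddings
metadata = rng.normal(size=(300, 6))         # stand-in for numeric metadata (time, GPS, ...)

# Scaling keeps metadata extremes from dominating the distance computations
X = np.hstack([cnn_features, StandardScaler().fit_transform(metadata)])

for name, algo in [("AffinityPropagation", AffinityPropagation(random_state=2)),
                   ("BIRCH", Birch(n_clusters=5))]:
    labels = algo.fit_predict(X)
    n = len(set(labels))
    sil = silhouette_score(X, labels) if n > 1 else float("nan")
    print(f"{name}: {n} clusters, silhouette = {sil:.3f}")
```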
|
7 |
Towards better privacy preservation by detecting personal events in photos shared within online social networks / Vers une meilleure protection de la vie privée par la détection d'événements dans les photos partagées sur les réseaux sociaux
Raad, Eliana 04 December 2015 (has links)
Today, social networking has considerably changed the way people take pictures, at any time and anywhere they go. More than 500 million photos are uploaded and shared every day, along with more than 200 hours of video every minute. More particularly, with the ubiquity of smartphones, social network users now take photos of events in their lives, travels, experiences, etc. and instantly upload them online. Such public data sharing puts the users' privacy at risk and exposes them to a surveillance that is growing at a very rapid rate. Furthermore, new techniques are used today to extract publicly shared data and combine it with other data in ways never before thought possible. However, social network users do not realize the wealth of information that can be gathered from image data and used to track all their activities at every moment (e.g., the case of cyberstalking). Therefore, in many situations (such as politics, fraud fighting, cultural criticism, etc.), it becomes extremely hard to maintain individuals' anonymity when the authors of the published data need to remain anonymous. Thus, the aim of this work is to provide a privacy-preserving constraint (de-linkability) to bound the amount of information that can be used to re-identify individuals from their online profile information. Firstly, we provide a framework able to quantify the re-identification threat and to sanitize multimedia documents before they are published and shared. Secondly, we propose a new approach to enrich the profile information of the individuals to protect. To this end, we exploit personal events in the individuals' own posts as well as those shared by their friends/contacts. Specifically, our approach is able to detect and link users' elementary events using photos (and related metadata) shared within their online social networks. A prototype has been implemented and several experiments have been conducted to validate our different contributions.
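Elementary event detection from photo metadata can be sketched, under simple assumptions, as density-based clustering of timestamps and GPS coordinates: photos taken close together in time and space form one event. The coordinates, scaling factors, and DBSCAN parameters below are placeholders; the thesis's actual detection and linking method, which also links events across a user's contacts, is richer than this.

```python
import numpy as np
from sklearn.cluster import DBSCAN

# Hypothetical photo metadata: (unix timestamp, latitude, longitude) per shared photo
photos = np.array([
    [1445000000, 48.8566, 2.3522],
    [1445000600, 48.8570, 2.3530],
    [1445001300, 48.8575, 2.3519],
    [1445600000, 50.8503, 4.3517],
    [1445600900, 50.8510, 4.3520],
])

# Scale time and space so that "same event" roughly means within a few hours
# and a few kilometres, then cluster; each cluster is one elementary event.
scaled = np.column_stack([photos[:, 0] / 3600.0,      # hours
                          photos[:, 1] * 111.0,       # ~km per degree latitude
                          photos[:, 2] * 74.0])       # ~km per degree longitude near 49°N
events = DBSCAN(eps=5.0, min_samples=2).fit_predict(scaled)
print(events)    # e.g. [0 0 0 1 1]
```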
|
8 |
Fault Detection and Diagnosis for Automotive Camera using Unsupervised Learning / Feldetektering och Diagnostik för Bilkamera med Oövervakat Lärande
Li, Ziyou January 2023 (has links)
This thesis investigates a fault detection and diagnosis system for automotive cameras using unsupervised learning, guided by three questions: 1) Can a front-looking wide-angle camera image dataset be created using Hardware-in-Loop (HIL) simulations? 2) Can an Adversarial Autoencoder (AAE) based unsupervised camera fault detection and diagnosis method be crafted for the SPA2 Vehicle Control Unit (VCU) using an image dataset created with HIL? 3) Does an AAE surpass the performance of a Variational Autoencoder (VAE) for the unsupervised automotive camera fault diagnosis model? In the field of camera fault studies, automotive cameras stand out for their complex operational context, particularly in Advanced Driver-Assistance Systems (ADAS) applications. The literature review finds a notable gap in comprehensive image datasets addressing the image artefact spectrum of ADAS-equipped automotive cameras under real-world driving conditions. In this study, normal and fault scenarios for automotive cameras are defined by leveraging published and company studies, and a fault diagnosis model using unsupervised learning is proposed and examined. The image fault types defined and included are Lens Flare, Gaussian Noise and Dead Pixels. Along with normal driving images, a balanced fault-injected image dataset is collected using real-time sensor simulation of driving scenarios with an industrially recognised HIL setup. An AAE-based unsupervised automotive camera fault diagnosis system using VGG16 as the encoder-decoder structure is proposed, and its performance is evaluated on both the self-collected dataset and fault-injected KITTI raw images. For the non-processed KITTI dataset, morphological operations are examined and employed as preprocessing. The performance of the system is discussed in comparison with supervised and unsupervised image partition methods in related work. The research found that the AAE method outperforms the popular VAE method, that VGG16 as the encoder-decoder structure significantly outperforms a 3-layer Convolutional Neural Network (CNN) and ResNet18, and that morphological preprocessing significantly improves system performance. The best-performing VGG16-AAE model achieves 62.7% diagnosis accuracy on the self-collected dataset and 86.4% accuracy on the double-erosion-processed fault-injected KITTI dataset. In conclusion, this study introduces a novel scheme for collecting automotive sensor data using Hardware-in-Loop, applies preprocessing techniques that enhance image partitioning, and examines the application of unsupervised models for diagnosing faults in automotive cameras.
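Two of the three fault types and the erosion preprocessing described above can be reproduced with a few lines of NumPy and OpenCV, as sketched below; lens flare is omitted because it requires a rendering model. The noise level, dead-pixel fraction, kernel size, and frame shape are assumed values, not the thesis's settings.

```python
import numpy as np
import cv2

def inject_gaussian_noise(img, sigma=25.0):
    """Add zero-mean Gaussian noise and clip back to valid 8-bit range."""
    noisy = img.astype(np.float32) + np.random.normal(0.0, sigma, img.shape)
    return np.clip(noisy, 0, 255).astype(np.uint8)

def inject_dead_pixels(img, fraction=0.01):
    """Set a random fraction of pixel locations to zero (stuck-at-zero pixels)."""
    out = img.copy()
    mask = np.random.rand(*img.shape[:2]) < fraction
    out[mask] = 0
    return out

def double_erosion(img, ksize=3):
    """Morphological erosion applied twice, as a simple preprocessing step."""
    kernel = np.ones((ksize, ksize), np.uint8)
    return cv2.erode(img, kernel, iterations=2)

# Hypothetical camera frame; in the thesis the frames come from HIL simulation or KITTI
frame = np.random.randint(0, 256, (375, 1242, 3), dtype=np.uint8)
faulty = inject_dead_pixels(inject_gaussian_noise(frame), fraction=0.005)
preprocessed = double_erosion(faulty)
print(frame.shape, faulty.dtype, preprocessed.shape)
```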
|
9 |
Structural priors for multiobject semi-automatic segmentation of three-dimensional medical images via clustering and graph cut algorithms / A priori de structure pour la segmentation multi-objet d'images médicales 3d par partition d'images et coupure de graphes
Kéchichian, Razmig 02 July 2013 (has links)
We develop a generic Graph Cut-based semiautomatic multiobject image segmentation method principally for use in routine medical applications ranging from tasks involving few objects in 2D images to fairly complex near whole-body 3D image segmentation. The flexible formulation of the method allows its straightforward adaptation to a given application. In particular, the graph-based vicinity prior model we propose, defined as shortest-path pairwise constraints on the object adjacency graph, can be easily reformulated to account for the spatial relationships between objects in a given problem instance. The segmentation algorithm can be tailored to the runtime requirements of the application and the online storage capacities of the computing platform by an efficient and controllable Voronoi tessellation clustering of the input image which achieves a good balance between cluster compactness and boundary adherence criteria. Qualitative and quantitative comprehensive evaluation and comparison with the standard Potts model confirm that the vicinity prior model brings significant improvements in the correct segmentation of distinct objects of identical intensity, the accurate placement of object boundaries and the robustness of segmentation with respect to clustering resolution. Comparative evaluation of the clustering method with competing ones confirms its benefits in terms of runtime and quality of produced partitions. Importantly, compared to voxel segmentation, the clustering step improves both overall runtime and memory footprint of the segmentation process up to an order of magnitude virtually without compromising the segmentation quality.
|