11

X-ray microtomography of materials with pseudo-brittle behavior: identification of the crack network

Hauss, Grégory 06 December 2012
Materials displaying pseudo-brittle behavior have been widely studied over the past decade, and the characterization of the crack network has become an important step towards understanding their damage behavior. The aim of this work is to characterize this crack space in 3D as finely as possible using X-ray computed microtomography. This was achieved (1) by designing an in-situ compressive device which maintains a sample in a cracked state during microtomographic data acquisition, and (2) by processing the images with relevant image filtering techniques for a better crack network characterization. Two parameters of choice are then measured: the surface area and the volume of the crack network. This work is the first step of a global procedure which aims to numerically model the mechanical behavior of pseudo-brittle materials based on their real 3D crack geometry.
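The processing pipeline itself is not given in the abstract. As a rough sketch of the kind of 3D crack identification described (thresholding followed by connected-component labeling), here is a minimal Python example; the threshold, connectivity choice and minimum component size are assumptions, not the thesis's values:

```python
import numpy as np
from scipy import ndimage

def extract_crack_network(volume, threshold):
    """Segment low-attenuation voxels (cracks) and keep 3D connected components."""
    # Cracks appear as dark voxels in the reconstructed attenuation volume
    mask = volume < threshold
    # 26-connectivity in 3D: a 3x3x3 structuring element of ones
    structure = np.ones((3, 3, 3), dtype=bool)
    labels, n = ndimage.label(mask, structure=structure)
    # Discard tiny components that are likely reconstruction noise
    sizes = ndimage.sum(mask, labels, index=range(1, n + 1))
    keep = np.isin(labels, np.nonzero(sizes >= 50)[0] + 1)
    return keep

def crack_volume_fraction(crack_mask):
    # Fraction of the sample volume occupied by the crack network
    return crack_mask.mean()
```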
12

The determinant morphism for the moduli spaces of p-divisible groups

Chen, Miaofen 11 May 2011
Let \M be a moduli space of p-divisible groups introduced by Rapoport and Zink. Assume that \M is unramified of EL or PEL type, unitary or symplectic. Let \Mrig be the Berthelot generic fiber of \M. This is a rigid analytic space over which there exists a tower of finite étale coverings (\M_K)_K classifying the level structures. We define a determinant morphism \det_K from the tower (\M_K)_K to a tower of étale rigid analytic spaces of dimension 0 associated to the cocenter of the reductive group related to \M. This is a local analogue, at the non-archimedean places, of the determinant morphism for Shimura varieties defined by Deligne. As for Shimura varieties, we prove that the geometric fibers of the determinant morphism \det_K are the geometrically connected components of \M_K. We also define exterior power morphisms which generalize the determinant morphism on the tower of rigid analytic spaces associated to a Lubin-Tate space.
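In notation, the main result can be restated schematically as follows (the notation below is chosen here for illustration and is not taken from the thesis):

```latex
% Schematic summary (notation ours): \det_K maps each level of the tower
% to a 0-dimensional etale rigid space built from the cocenter G^{ab};
% since its geometric fibers are the geometrically connected components,
% it induces an injection on pi_0:
\det\nolimits_K \colon \mathcal{M}_K \longrightarrow \mathcal{M}(G^{\mathrm{ab}})_K,
\qquad
\pi_0\!\left(\mathcal{M}_K \mathbin{\hat{\otimes}} \mathbb{C}_p\right)
\hookrightarrow \mathcal{M}(G^{\mathrm{ab}})_K(\mathbb{C}_p)
```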
13

Mining Tera-Scale Graphs: Theory, Engineering and Discoveries

Kang, U 01 May 2012
How do we find patterns and anomalies in graphs with billions of nodes and edges, which do not fit in memory? How can we use parallelism for such tera- or peta-scale graphs? In this thesis, we propose PEGASUS, a large-scale graph mining system implemented on top of the HADOOP platform, the open-source implementation of MAPREDUCE. PEGASUS includes algorithms which help us spot patterns and anomalous behaviors in large graphs. PEGASUS enables structural analysis of large graphs. We unify many different structural analysis algorithms, including the analysis of connected components, PageRank, and radius/diameter, into a general primitive called GIM-V. GIM-V is highly optimized, achieving good scale-up with the number of edges and available machines. We discover surprising patterns using GIM-V, including seven degrees of separation in one of the largest publicly available Web graphs, with 7 billion edges. PEGASUS also enables inference and spectral analysis on large graphs. We design an efficient distributed belief propagation algorithm which infers the states of unlabeled nodes given a set of labeled nodes. We also develop an eigensolver for computing the top k eigenvalues and eigenvectors of the adjacency matrices of very large graphs. We use the eigensolver to discover anomalous adult advertisers in the who-follows-whom Twitter graph with 3 billion edges. In addition, we develop an efficient tensor decomposition algorithm and use it to analyze a large knowledge base tensor. Finally, PEGASUS allows the management of large graphs. We propose efficient graph storage and indexing methods to answer graph mining queries quickly. We also develop an edge layout algorithm for better graph compression.
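In the published PEGASUS work, GIM-V generalizes matrix-vector multiplication through three user-supplied operations: combine2, combineAll and assign. The following single-machine Python sketch illustrates the idea on connected components; the real system executes these stages as HADOOP jobs, so this is an illustration of the primitive, not the thesis's implementation:

```python
def gim_v(edges, v, combine2, combine_all, assign, max_iter=100):
    """Generalized Iterative Matrix-Vector multiplication (single-machine sketch).

    edges: list of (i, j) pairs, v: dict mapping node -> value.
    """
    for _ in range(max_iter):
        partial = {}  # messages gathered per destination node
        for i, j in edges:
            partial.setdefault(i, []).append(combine2(1, v[j]))
        new_v = {i: assign(v[i], combine_all(partial.get(i, [v[i]])))
                 for i in v}
        if new_v == v:          # fixed point reached
            return new_v
        v = new_v
    return v

# Connected components: propagate the minimum component id.
edges = [(0, 1), (1, 0), (2, 3), (3, 2)]  # undirected edges listed both ways
v0 = {n: n for n in range(4)}             # each node starts as its own id
cc = gim_v(edges, v0,
           combine2=lambda m, vj: vj,     # pass the neighbor's id along
           combine_all=min,               # keep the smallest id received
           assign=min)                    # compare with the current id
print(cc)  # {0: 0, 1: 0, 2: 2, 3: 2}
```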
14

Connected component tree construction for embedded systems

Matas, Petr 30 June 2014
The aim of this work is to enable the construction of embedded digital image processing systems that are both flexible and powerful. The thesis explores the use of an image representation called the connected component tree (CCT) as the basis for implementing the entire image processing chain. This is possible because the CCT is both simple and general: CCT-based implementations of operators exist for everything from filtering to segmentation and recognition. A typical CCT-based image processing chain consists of CCT construction from an input image, a cascade of CCT transformations which implement the individual operators, and image restitution, which generates the output image from the modified CCT. The most time-demanding step is the CCT construction, and this work focuses on it. The thesis introduces the CCT and its possible representations in computer memory, shows some of its applications, and analyzes existing CCT construction algorithms. A new parallel CCT construction algorithm producing the parent point tree representation of the CCT is proposed. The algorithm is suitable for an embedded system implementation due to its low memory requirements. It consists of many building and merging tasks: a building task constructs the CCT of a single image line, which is treated as a one-dimensional signal, and merging tasks fuse the partial CCTs together. Three different task scheduling strategies are developed and evaluated, and the performance of the algorithm is evaluated on multiple parallel computers. A throughput of 83 Mpx/s at a speedup of 13.3 is achieved on a 16-core machine with Opteron 885 CPUs. Next, the algorithm is further adapted for hardware and implemented as a new parallel hardware architecture. The architecture contains 16 basic blocks, each dedicated to processing an image partition and consisting of execution units and memory. A special interconnection switch allows some execution units to access memory in other basic blocks, which the algorithm requires for the final merging of the CCTs constructed by different basic blocks. The architecture is implemented in VHDL, and its functional simulation shows a performance of 145 Mpx/s at a clock frequency of 120 MHz.
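As an illustration of what a building task computes, the classic union-find construction of a max-tree (one form of CCT) for a single line treated as a 1D signal can be sketched as follows; this is the textbook sequential algorithm, not the thesis's parallel one:

```python
def max_tree_1d(signal):
    """Parent point tree (max-tree) of a 1D signal via union-find.

    parent[p] points to p's parent pixel; the global root points to itself.
    """
    n = len(signal)
    parent = [-1] * n          # -1 marks pixels not yet processed
    zpar = list(range(n))      # union-find structure on pixel indices

    def find(p):               # find with path halving
        while zpar[p] != p:
            zpar[p] = zpar[zpar[p]]
            p = zpar[p]
        return p

    # Process pixels from highest to lowest grey level (max-tree order)
    order = sorted(range(n), key=lambda p: signal[p], reverse=True)
    for p in order:
        parent[p] = p
        for q in (p - 1, p + 1):             # the two 1D neighbors
            if 0 <= q < n and parent[q] != -1:
                r = find(q)
                if r != p:
                    parent[r] = p            # attach the neighbor's subtree
                    zpar[r] = p
    return parent

# Two maxima merge under the level-2 pixel, rooted at the global minimum
print(max_tree_1d([1, 3, 2, 4, 1]))  # [4, 2, 0, 2, 4]
```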
15

Logo detection, recognition and spotting in context by matching local visual features

Le, Viet Phuong 08 December 2015
This thesis presents a logo spotting framework for spotting logos on document images, focused on document categorization and document retrieval problems. We present three key-point matching methods: simple key-point matching with the nearest neighbor, matching by the 2-nearest-neighbor rule, and matching by two local descriptors at different matching stages. The last two methods are improvements of the first. In addition, using a density-based clustering method to group the matches in the proposed spotting framework helps not only to segment the candidate logo region but also to reject incorrect matches as outliers. Moreover, to maximize performance and to locate logos, a two-stage algorithm is proposed for geometric verification based on homography with RANSAC. Since key-point-based approaches are costly, we have also invested in optimizing the proposed framework. The problems of text/graphics separation are studied, and we propose a method for segmenting text and non-text in document images based on a set of powerful connected-component features. We applied dimensionality reduction techniques to reduce the high-dimensional vectors of local descriptors, and approximate nearest neighbor (ANN) search algorithms to optimize the framework. In addition, we conducted experiments on a document retrieval system using the text/non-text segmented documents and the ANN algorithm. The results show that the computation time of the system decreases sharply by 56% while its accuracy decreases only slightly, by nearly 2.5%. Overall, we have proposed an effective and efficient approach for solving the problem of logo spotting in document images, and we have designed it to be flexible for future improvements by us and by other researchers. We believe this work can be considered a step towards solving the problem of complete analysis and understanding of document images.
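The 2-nearest-neighbor matching rule and the RANSAC-based homography verification are standard building blocks. A minimal OpenCV sketch of these two steps might look as follows; SIFT features and the ratio and inlier thresholds are assumptions, not the thesis's choices:

```python
import cv2
import numpy as np

def spot_logo(logo_img, page_img, ratio=0.75):
    """2-NN ratio matching followed by RANSAC homography verification."""
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(logo_img, None)
    kp2, des2 = sift.detectAndCompute(page_img, None)

    # 2-nearest-neighbor rule: keep a match only if its distance is
    # clearly smaller than that of the second-nearest neighbor
    pairs = cv2.BFMatcher(cv2.NORM_L2).knnMatch(des1, des2, k=2)
    good = [m for m, n in (p for p in pairs if len(p) == 2)
            if m.distance < ratio * n.distance]
    if len(good) < 4:                  # a homography needs 4+ matches
        return None

    src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    # RANSAC rejects matches inconsistent with a single planar projection
    H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    return H if H is not None and int(inliers.sum()) >= 4 else None
```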
16

Connectivity of channelized sedimentary bodies: analysis and simulation strategies in subsurface modeling

Rongier, Guillaume 15 March 2016
Channels are the main sedimentary structures for sediment transportation and deposition from the continents to the ocean floor. The significant permeability of their deposits enables fluid circulation and storage. As illustrated by turbiditic systems, the channel fill is highly heterogeneous. Combined with the spatial organization of the channels, this significantly impacts the connectivity between the permeable deposits. The scarcity of field data means knowledge of these subsurface reservoir architectures is incomplete. In such environments, stochastic simulations are used to estimate the resources and give insight into the associated uncertainties. Several methods have been developed to reproduce these complex environments. They raise two main concerns: how to analyze and compare the connectivity of a set of stochastic simulations, and how to improve the representation of connectivity within stochastic simulations of channels and reduce the uncertainties. The first concern leads to the development of a method to objectively compare realizations in terms of connectivity. The proposed approach relies on the connected components of the simulations, on which several indicators are computed. A multidimensional scaling (MDS) representation facilitates the comparison of realizations; the observations on the MDS are then validated by analysis of the heatmap and the indicators. Application to a synthetic case study highlights differences in connectivity between several methods and parameter values used to model channel/levee complexes. In particular, some methods are far from representing channel-shaped bodies. Two main issues derive from the second concern. The first is the difficulty of simulating a highly elongated object, here a channel, conditioned to well or seismic-derived data. We rely on a formal grammar, the Lindenmayer system, to stochastically simulate conditional channel objects. Predefined growth rules control the channel morphology, from straight to sinuous channels. That morphology conditions to the data during its development thanks to attractive and repulsive constraints, which ensure the conditioning while preserving the channel morphology as much as possible. The second issue arises from the limited control over channel organization. This is addressed by taking into account the evolution processes underlying channel organization: an initial channel is simulated with a Lindenmayer system, and that channel then migrates using sequential Gaussian simulation or multiple-point simulation. This process reproduces the complex relationships between successive channels without relying on partially validated physical models with often-constraining parameterization. Applications of these works to synthetic cases highlight the potential of the approaches and open up interesting prospects for better taking connectivity into account when stochastically simulating channels.
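As a rough sketch of the comparison workflow described (connectivity indicators per realization, then MDS on their pairwise distances), one might write the following; the particular indicators chosen here are assumptions, not the thesis's:

```python
import numpy as np
from scipy import ndimage
from scipy.spatial.distance import pdist, squareform
from sklearn.manifold import MDS

def connectivity_indicators(facies):
    """A few connectivity indicators from a binary facies realization."""
    labels, n = ndimage.label(facies)
    sizes = np.bincount(labels.ravel())[1:]      # voxels per component
    total = sizes.sum()
    return np.array([
        n,                                       # number of components
        sizes.max() / total if total else 0.0,   # largest-component fraction
        total / facies.size,                     # net-to-gross proportion
    ])

def mds_of_realizations(realizations):
    """Embed realizations in 2D so similar connectivity plots close together."""
    X = np.array([connectivity_indicators(r) for r in realizations])
    X = (X - X.mean(0)) / (X.std(0) + 1e-9)      # standardize indicators
    D = squareform(pdist(X))                     # pairwise distances
    return MDS(n_components=2, dissimilarity="precomputed",
               random_state=0).fit_transform(D)
```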
17

A Real-Time Computational Decision Support System for Compounded Sterile Preparations using Image Processing and Artificial Neural Networks

Regmi, Hem Kanta January 2016 (has links)
No description available.
18

Camera-Captured Document Image Analysis

Kasar, Thotreingam 11 1900
Text is no longer confined to scanned pages and often appears in camera-based images originating from text on real-world objects. Unlike images from conventional flatbed scanners, which have a controlled acquisition environment, camera-based images pose new challenges such as uneven illumination, blur, poor resolution, perspective distortion and 3D deformations that can severely affect the performance of any optical character recognition (OCR) system. Due to the variations in imaging conditions as well as target document types, traditional OCR systems designed for scanned images cannot be directly applied to camera-captured images, and a new level of processing needs to be addressed. In this thesis, we study some of the issues commonly encountered in camera-based image analysis and propose novel methods to overcome them. All the methods make use of color connected components.

1. Connected component descriptor for document image mosaicing. Document image analysis often requires mosaicing when it is not possible to capture a large document at a reasonable resolution in a single exposure. Such a document is captured in parts, and mosaicing stitches them into a single image. Since connected components (CCs) in a document image can easily be extracted regardless of image rotation, scale and perspective distortion, we design a robust feature named the connected component descriptor, tailored for mosaicing camera-captured document images. The method involves extraction of a circular measurement region around each CC and its description using the angular radial transform (ART). To ensure geometric consistency during feature matching, the ART coefficients of a CC are augmented with those of its 2 nearest neighbors. Our method addresses two critical issues often encountered in correspondence matching: (i) the stability of features and (ii) robustness against false matches due to multiple instances of many characters in a document image. We illustrate the effectiveness of the proposed method on camera-captured document images exhibiting large variations in viewpoint, illumination and scale.

2. Font and background color independent text binarization. The first step in an OCR system, after document acquisition, is binarization, which converts a gray-scale/color image into a two-level image: the foreground text and the background. We propose two methods for binarization of color documents whereby the foreground text is output as black and the background as white regardless of the polarity of foreground-background shades. (a) Hierarchical CC analysis: the method employs an edge-based connected component approach and automatically determines a threshold for each component. It overcomes several limitations of existing locally-adaptive thresholding techniques. Firstly, it can handle documents with multi-colored texts and different background shades. Secondly, it is applicable to documents having text of widely varying sizes, usually not handled by local binarization methods. Thirdly, it automatically computes the threshold for binarization and the logic for inverting the output from the image data, and does not require any input parameter. However, the method is sensitive to complex backgrounds since it relies on edge information to identify CCs. It also uses script-specific characteristics to filter out edge components before binarization and currently works well for the Roman script only. (b) Contour-based color clustering (COCOCLUST): to overcome the above limitations, we introduce a novel unsupervised color clustering approach that operates on a 'small' representative set of color pixels identified using contour information. Based on the assumption that every character is of a uniform color, we analyze each color layer individually and identify potential text regions for binarization. Experiments on several complex images having large variations in font, size, color, orientation and script illustrate the robustness of the method.

3. Multi-script and multi-oriented text extraction from scene images. Scene text understanding normally involves a pre-processing step of text detection and extraction before subjecting the acquired image to the character recognition task. The subsequent recognition is performed only on the detected text regions so as to mitigate the effect of background complexity. We propose color-based CC labeling for robust text segmentation from natural scene images. Text CCs are identified using a combination of support vector machine and neural network classifiers trained on a set of low-level features derived from boundary, stroke and gradient information. We develop a semiautomatic annotation toolkit to generate pixel-accurate ground truth for 100 scenic images containing text in various layout styles and multiple scripts. The overall precision, recall and f-measure obtained on our dataset are 0.8, 0.86 and 0.83, respectively. The proposed method is also compared with others in the literature using the ICDAR 2003 robust reading competition dataset, which, however, has only horizontal English text. The overall precision, recall and f-measure obtained are 0.63, 0.59 and 0.61, respectively, which is comparable to the best-performing methods in the ICDAR 2005 text locating competition. A recent method proposed by Epshtein et al. [1] achieves better results but cannot handle arbitrarily oriented text. Our method, however, works well for generic scene images having arbitrary text orientations.

4. Alignment of curved text lines. Conventional OCR systems perform poorly on document images that contain multi-oriented text lines. We propose a technique that first identifies individual text lines by grouping adjacent CCs based on their proximity and regularity. For each identified text string, a B-spline curve is fitted to the centroids of the constituent characters and normal vectors are computed along the fitted curve. Each character is then individually rotated such that the corresponding normal vector is aligned with the vertical axis. The method has been tested on a data set consisting of 50 images with text laid out in various ways, namely along arcs, waves, triangles and combinations of these with linearly skewed text lines. It yields 95.9% recognition accuracy on text strings where, before alignment, state-of-the-art OCRs fail to recognize any text.

The CC-based pre-processing algorithms developed are well suited for processing camera-captured images. We demonstrate the feasibility of the algorithms on the publicly available ICDAR 2003 robust reading competition dataset and our own database comprising camera-captured document images that contain multiple scripts and arbitrary text layouts.
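As an illustrative reconstruction of the curved-text alignment step (method 4), the following sketch fits a B-spline to character centroids and rotates each character by the local baseline slope; it is inferred from the abstract, not taken from the thesis:

```python
import numpy as np
from scipy.interpolate import splprep, splev
from scipy.ndimage import rotate

def align_curved_line(centroids, char_images):
    """Level a curved text line character by character.

    centroids: (N, 2) array of (x, y) character centroids, N >= 4
    (the default cubic spline needs at least 4 points).
    char_images: list of N cropped character images.
    """
    x, y = centroids[:, 0], centroids[:, 1]
    tck, u = splprep([x, y], s=len(x))        # smoothing B-spline fit
    dx, dy = splev(u, tck, der=1)             # tangent along the curve
    angles = np.degrees(np.arctan2(dy, dx))   # local baseline slope
    # Rotating each glyph by its local slope levels the baseline, i.e.
    # aligns the curve normal with the vertical axis (the rotation sign
    # depends on whether the y axis points up or down in your convention)
    return [rotate(img, ang, reshape=True)
            for img, ang in zip(char_images, angles)]
```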
19

Methods for Multisensory Detection of Light Phenomena on the Moon as a Payload Concept for a Nanosatellite Mission

Maurer, Andreas January 2020
For 500 years, transient light phenomena (TLP) have been observed on the lunar surface by ground-based observers. The actual physical cause of most of these events is still unknown today. Current plans of NASA and SpaceX to send astronauts back to the Moon, and already successful deep-space CubeSat missions, will in the future allow research nanosatellite missions to cislunar space. This thesis presents a new hardware and software concept for a future payload on such a nanosatellite. The main task was to develop and implement a high-performance image processing algorithm to detect short brightening flashes on the lunar surface. Possible reference scenarios were analyzed based on a review of historically reported phenomena, possible explanatory theories for these phenomena, and currently active and planned ground- or space-based observatories. From the presented scenarios, the detection of brightening events was chosen and requirements for this scenario were stated. Afterwards, possible detectors, processing computers and image processing algorithms were researched and compared against the specified requirements. This analysis of available algorithms was used to develop a new high-performance detection algorithm for transient brightening events on the Moon. An implementation of this algorithm running on the processor and integrated GPU of a Mac Mini achieved a frame rate of 55 FPS when processing images with a resolution of 4.2 megapixels. Its functionality and performance were verified on the remote telescope operated by the Chair of Space Technology of the University of Würzburg. Furthermore, the developed algorithm was successfully ported to the Nvidia Jetson Nano and its performance compared with an FPGA-based image processing algorithm. The results were used to choose an FPGA as the main processing computer of the payload. The proposed concept uses two backside-illuminated CMOS image sensors connected to a single FPGA, on which the developed image processing algorithm should be implemented. Further work is required to realize the proposed concept by building the actual hardware and porting the developed algorithm onto this platform.
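The detection algorithm itself is not given in the abstract. A minimal sketch of a generic frame-differencing flash detector of the kind described might look as follows; the robust threshold and the minimum pixel count are assumptions:

```python
import numpy as np

def detect_flashes(prev_frame, frame, k_sigma=5.0, min_pixels=2):
    """Flag transient brightenings between two co-registered frames.

    prev_frame, frame: 2D grayscale arrays of the same exposure.
    Returns the (row, col) coordinates of candidate flash pixels.
    """
    diff = frame.astype(np.float64) - prev_frame.astype(np.float64)
    # Robust noise estimate from the median absolute deviation
    mad = np.median(np.abs(diff - np.median(diff)))
    sigma = 1.4826 * mad + 1e-12
    candidates = diff > k_sigma * sigma      # significant brightening only
    if candidates.sum() < min_pixels:        # reject isolated hot pixels
        return np.empty((0, 2), dtype=int)
    return np.argwhere(candidates)
```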
