151

Projeto de uma arquitetura dedicada à compressão de imagens no padrão JPEG2000 / Design of a dedicated architecture to Image compression in the JPEG2000 Standard

Silva, Sandro Vilela da January 2005 (has links)
The increasing demand for higher data transmission rates and larger data storage capacity calls for techniques that increase image compression rates while preserving image quality. The JPEG2000 standard uses the discrete wavelet transform and arithmetic coding to reach high compression rates while keeping reasonable quality in the resulting compressed image. The standard allows both lossy and lossless compression, depending on the type of wavelet transform used. This work implements the internal hardware blocks that compose a lossy JPEG2000 image compressor. The main component of this compressor is the two-dimensional irreversible discrete wavelet transform, implemented with a lifting scheme using the Daubechies 9/7 coefficients described in the literature. These coefficients, originally proposed in floating-point representation, are implemented here as rounded fixed-point values, which introduces errors that are estimated and controlled. Several hardware architectures for the two-dimensional irreversible discrete wavelet transform were implemented to evaluate the trade-off between description style, area consumption, and propagation delay. The architecture with the best trade-off requires 2,090 logic cells of an FPGA device and operates at up to 78.72 MHz, yielding a processing rate of 28.2 million samples per second, with a mean squared error of 0.41% per transform level. The entropy-encoder block was synthesized from a behavioral description, producing hardware able to process up to 843 thousand input coefficients per second. The results indicate that a lossy JPEG2000 compressor built from the blocks implemented in this dissertation, operating at the maximum defined clock frequency, can encode on average 1.8 million coefficients per second, that is, up to 27 frames of 256x256 pixels per second. The encoding rate is limited by the entropy encoder, whose more complex algorithm requires further work to increase its throughput through additional hardware parallelism.
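For reference, the sketch below illustrates the kind of computation this dissertation describes: one level of the 1-D 9/7 lifting transform with the Daubechies coefficients rounded to fixed point, so the rounding error can be compared against a floating-point reference. It is a minimal NumPy illustration under stated assumptions, not the dissertation's hardware description; the number of fractional bits, the border handling, and the scaling convention are assumptions.

```python
import numpy as np

# CDF 9/7 lifting coefficients used by the JPEG2000 irreversible transform
# (floating-point values as commonly quoted in the literature).
ALPHA, BETA, GAMMA, DELTA, K = -1.586134342, -0.052980118, 0.882911075, 0.443506852, 1.230174105

def to_fixed(x, frac_bits):
    """Round a coefficient to a fixed-point value with `frac_bits` fractional bits."""
    return round(x * (1 << frac_bits)) / (1 << frac_bits)

def dwt97_1d(signal, frac_bits=None):
    """One level of the 1-D irreversible 9/7 lifting transform.

    If `frac_bits` is given, the lifting coefficients are rounded to fixed point,
    mimicking a hardware datapath; otherwise full floating point is used.
    """
    a, b, g, d, k = ALPHA, BETA, GAMMA, DELTA, K
    if frac_bits is not None:
        a, b, g, d, k = (to_fixed(c, frac_bits) for c in (a, b, g, d, k))

    x = np.asarray(signal, dtype=float)
    s, h = x[0::2].copy(), x[1::2].copy()   # even (approximation) / odd (detail) samples

    # Four lifting steps (simple edge replication at the borders, an assumption here).
    h += a * (s + np.append(s[1:], s[-1]))
    s += b * (h + np.append(h[0], h[:-1]))
    h += g * (s + np.append(s[1:], s[-1]))
    s += d * (h + np.append(h[0], h[:-1]))

    # Scaling step (one common convention; the dissertation may fold it elsewhere).
    return s * k, h / k

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    row = rng.integers(0, 256, size=256).astype(float)
    lo_ref, hi_ref = dwt97_1d(row)                 # floating-point reference
    lo_fix, hi_fix = dwt97_1d(row, frac_bits=10)   # 10 fractional bits, an illustrative choice
    err = np.mean((np.r_[lo_ref, hi_ref] - np.r_[lo_fix, hi_fix]) ** 2) / np.mean(np.r_[lo_ref, hi_ref] ** 2)
    print(f"relative mean squared error from fixed-point rounding: {err:.2e}")
```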
153

Codificação de vídeo baseada em fractais e representações esparsas / Video coding based on fractals and sparse representations

Lima, Vitor de, 1985- 03 December 2012 (has links)
Advisor: Hélio Pedrini / Master's dissertation (mestrado) - Universidade Estadual de Campinas, Instituto de Computação / Abstract: A video is a sequence of still images representing scenes in motion: images that are extremely similar to one another, separated by abrupt changes in content. Transmitting and storing these images without any preprocessing would require a massive amount of storage space and communication channels with very high bandwidth. Lossy compression methods were created to reduce the number of bits used to represent this kind of data. These methods generally consist of an encoder and a decoder: the encoder generates a bit sequence that represents an acceptable approximation of the video in a predefined format, and the decoder reads this sequence and converts it back into a series of images. Transmitting video under extremely limited bandwidth has important applications such as video conferencing and closed-circuit television. Two approaches aimed at this setting are explored in this work: decomposition based on sparse representations and fractal coding. Most video coders rely on invertible transforms capable of representing spatially smooth images with few non-zero coefficients. Sparse representations generalize this idea by using a transform whose basis is an overcomplete dictionary, a set with more elements than the dimension of the vector space in which the transform operates. The data can be projected onto this dictionary using a fast greedy heuristic called Matching Pursuit. A video encoder combining this heuristic with a machine learning algorithm that constructs the overcomplete dictionary is proposed. Fractal encoders represent an approximation of the image as an iterated function system. To do so, a sequence of instructions, called a collage, is created and transmitted; the collage reconstructs an approximation of the original image from a smaller-scale version of it. The collage is built in such a way that, when applied repeatedly to any initial image, contracting it before each iteration, it converges to an approximation of the encoded image. Simpler and faster methods for creating the collage, and a generalization of these methods to video compression, are presented. Instead of building the collage by matching any block from the smaller scale to the original one, only a small subset of candidate matches is considered. The proposed video encoding method groups consecutive frames and encodes them as a volumetric fractal; the collage maps three-dimensional blocks between scales, using a smaller scale in both space and time. An adaptation of this algorithm for communication channels with unstable bandwidth is also presented. / Master's / Computer Science / Master in Computer Science
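As an illustration of the sparse-decomposition side of the work, the sketch below shows a plain Matching Pursuit loop over an overcomplete dictionary. It is a generic NumPy sketch, not the dissertation's encoder; the dictionary here is random rather than learned, and the atom count is an arbitrary assumption.

```python
import numpy as np

def matching_pursuit(signal, dictionary, n_atoms=10):
    """Greedy Matching Pursuit: approximate `signal` with `n_atoms` atoms
    drawn from an overcomplete `dictionary` (columns assumed unit-norm)."""
    residual = signal.astype(float).copy()
    coeffs = np.zeros(dictionary.shape[1])
    for _ in range(n_atoms):
        correlations = dictionary.T @ residual          # project residual on every atom
        best = np.argmax(np.abs(correlations))          # pick the best-matching atom
        coeffs[best] += correlations[best]
        residual -= correlations[best] * dictionary[:, best]
    return coeffs, residual

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    dim, n_dict = 64, 256                               # overcomplete: 256 atoms in a 64-dim space
    D = rng.normal(size=(dim, n_dict))
    D /= np.linalg.norm(D, axis=0)                      # unit-norm atoms
    x = rng.normal(size=dim)                            # stand-in for an image block
    c, r = matching_pursuit(x, D, n_atoms=16)
    print("non-zero coefficients:", np.count_nonzero(c))
    print("residual energy ratio: %.3f" % (np.linalg.norm(r) / np.linalg.norm(x)))
```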
154

Lossless medical image compression through lightweight binary arithmetic coding

Bartrina Rapesta, Joan, Sanchez, Victor, Serra Sagristà, Joan, Marcellin, Michael W., Aulí Llinàs, Francesc, Blanes, Ian 19 September 2017 (has links)
A contextual lightweight arithmetic coder is proposed for lossless compression of medical imagery. Context definition uses causal data from previously coded symbols, an inexpensive yet efficient approach. To further reduce the computational cost, a binary arithmetic coder with fixed-length codewords is adopted, thus avoiding the normalization procedure common in most implementations, and the probability of each context is estimated through bitwise operations. Experimental results are provided for several medical images and compared against state-of-the-art coding techniques, yielding average improvements of roughly 0.1 to 0.2 bps.
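To make the context-modelling idea concrete, here is a small, hypothetical sketch: causal neighbors form the context, and each context's probability of a 1-bit is updated with shifts only, in the spirit of the division-free estimation described above. The context definition, the update rule, and all parameters are illustrative assumptions, not the coder proposed in the paper.

```python
import numpy as np

def context_index(plane, r, c):
    """Causal context from already-coded neighbors (west, north, north-west):
    a hypothetical 3-bit context, one bit per neighbor's significance."""
    w  = plane[r, c - 1] if c > 0 else 0
    n  = plane[r - 1, c] if r > 0 else 0
    nw = plane[r - 1, c - 1] if (r > 0 and c > 0) else 0
    return (int(w > 0) << 2) | (int(n > 0) << 1) | int(nw > 0)

def estimate_context_probabilities(bitplane, prob_bits=7):
    """Estimate P(bit = 1) per context with a shift-based update, avoiding
    divisions: p <- p + ((bit << prob_bits) - p) >> 3."""
    probs = np.full(8, 1 << (prob_bits - 1), dtype=np.int32)   # start every context at 0.5
    rows, cols = bitplane.shape
    for r in range(rows):
        for c in range(cols):
            ctx = context_index(bitplane, r, c)
            bit = int(bitplane[r, c])
            probs[ctx] += ((bit << prob_bits) - probs[ctx]) >> 3
    return probs / float(1 << prob_bits)

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    plane = (rng.random((64, 64)) < 0.2).astype(np.uint8)      # synthetic sparse bitplane
    print("estimated P(1) per context:", np.round(estimate_context_probabilities(plane), 3))
```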
155

Deep learning for image compression / Apprentissage profond pour la compression d'image

Dumas, Thierry 07 June 2019 (has links)
Over the last twenty years, the amount of transmitted images and videos has increased noticeably, driven largely by services such as Facebook and Netflix. Even though transmission capacities improve, this growing volume of images and videos requires increasingly efficient compression methods. This thesis aims at improving, through learning, two critical components of modern image compression standards: the transform and the intra prediction. Deep neural networks are used for this task because of their high approximation power, which is needed to learn a reliable approximation of an optimal transform (or of an optimal intra prediction filter) applied to image pixels. Regarding the learning of a transform for image compression via neural networks, the challenge is to learn a single transform that is efficient in rate-distortion terms and remains so when compressing at different rates. Two approaches are proposed to take on this challenge. In the first approach, the neural network architecture imposes sparsity on the transform coefficients; the level of sparsity gives direct control over the compression rate, and to make the transform adapt to different rates, the sparsity level is driven stochastically during training. In the second approach, rate-distortion efficiency is obtained by minimizing a rate-distortion objective function during training; during the test phase, the quantization step sizes are gradually increased according to a schedule so that different rates can be reached with the single learned transform. Regarding the learning of an intra prediction filter for image compression via neural networks, the issue is to obtain a learned filter that adapts to the size of the image block to be predicted, to missing information in the prediction context, and to the variable quantization noise in that context. A set of neural networks is designed and trained so that the learned prediction filter has this adaptability.
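A toy NumPy sketch of the second approach's training objective is given below: distortion plus a lambda-weighted rate proxy, with additive-noise quantization during training and larger quantization steps yielding lower rates. The rate proxy, the noise trick, and the toy linear transform are illustrative assumptions; the thesis uses deep networks and a proper rate model.

```python
import numpy as np

def soft_quantize(y, step):
    """Training-time stand-in for uniform quantization: additive uniform noise
    (a common proxy, assumed here rather than taken from the thesis)."""
    return y + np.random.uniform(-step / 2, step / 2, size=y.shape)

def rate_distortion_loss(x, decode, y, step, lam):
    """L = D + lambda * R, with a crude per-coefficient bit-cost proxy for R."""
    y_q = soft_quantize(y, step)
    x_rec = decode(y_q)                                        # decoder / inverse transform
    distortion = np.mean((x - x_rec) ** 2)
    rate_proxy = np.mean(np.log2(1.0 + np.abs(y_q) / step))
    return distortion + lam * rate_proxy

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    x = rng.normal(size=(8, 8))
    W = rng.normal(size=(64, 64)) / 8.0                        # toy linear "analysis transform"
    y = W @ x.ravel()
    decode = lambda y_q: (np.linalg.pinv(W) @ y_q).reshape(8, 8)
    for step in (0.25, 0.5, 1.0):                              # larger steps: lower rate, higher distortion
        print(step, round(float(rate_distortion_loss(x, decode, y, step, lam=0.1)), 4))
```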
156

Abstraction et traitement de masses de données 3D animées / Abstraction and processing of large amounts of animated 3D data

Buchholz, Bert 20 December 2012 (has links)
In this thesis, we explore intermediate structures and their relationship to the algorithms that use them, in the context of photorealistic (PR) and non-photorealistic (NPR) rendering. We present new structures for rendering as well as new uses for existing structures, through three original contributions in the NPR and PR domains. First, we present binary shading, a method for generating stylized black-and-white images inspired by comic artists, which combines appearance and geometry in a graph-based energy formulation; by controlling the energies, the user can generate images in different styles and representations. The second contribution enables the temporally coherent parameterization of line animations for texturing purposes: we introduce a spatio-temporal structure over the input data and an energy formulation that yields a globally optimal parameterization, and, as with binary shading, the energy formulation provides simple yet important control over the output. Finally, we present an extension to point-based global illumination (PBGI), a method used extensively in film production in recent years. Our extension compresses the data generated by the original algorithm through quantization; it is memory-efficient, adds only negligible time overhead, and thereby enables the rendering of larger scenes, while the user can easily control the strength and quality of the compression. We also propose a number of possible extensions and improvements to the methods presented in the thesis.
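To illustrate the quantization-based compression idea behind the PBGI extension, the following is a generic uniform scalar quantizer applied to per-point radiance values. The bit depths, value range, and data layout are assumptions made for illustration, not the thesis's actual scheme.

```python
import numpy as np

def quantize(values, n_bits, vmin, vmax):
    """Uniform scalar quantization of floating-point values into n-bit integer codes."""
    levels = (1 << n_bits) - 1
    codes = np.round((np.clip(values, vmin, vmax) - vmin) / (vmax - vmin) * levels)
    return codes.astype(np.uint16)

def dequantize(codes, n_bits, vmin, vmax):
    """Reconstruct approximate values from the integer codes."""
    levels = (1 << n_bits) - 1
    return codes.astype(float) / levels * (vmax - vmin) + vmin

if __name__ == "__main__":
    rng = np.random.default_rng(4)
    radiance = rng.random((100_000, 3)) ** 2.2        # stand-in for per-point RGB radiance
    for bits in (4, 6, 8):                            # fewer bits: stronger compression, more error
        codes = quantize(radiance, bits, 0.0, 1.0)
        err = np.mean(np.abs(dequantize(codes, bits, 0.0, 1.0) - radiance))
        print(f"{bits}-bit codes: mean abs error = {err:.4f}")
```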
157

Texture Segmentation Using Fractal Features

Pongratananukul, Nattorn 01 January 2000 (has links)
In this work, we propose the application of fractal compression techniques to textured image segmentation, using the transformation coefficients as segmentation features. The result is improved by combining the fractal dimension feature with the transformation coefficients obtained from the original image and its filtered versions. Feature vectors are clustered with the K-means algorithm after pre-smoothing the features, and the number of features is minimized to reach a good compromise. In this integrated approach, we attempt to improve the segmentation of texture images using our method. Background on image segmentation and image compression is presented, along with algorithms for fractal dimension calculation, K-means clustering, and fractal compression. Experimental results are included, and possible future work is discussed.
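As a concrete illustration of the clustering stage, here is a plain K-means over per-pixel feature vectors. The feature values below are synthetic stand-ins, not fractal-coding coefficients, and the pre-smoothing step described above is omitted; all parameters are assumptions for illustration.

```python
import numpy as np

def kmeans(features, k, n_iter=20, seed=0):
    """Plain K-means over per-pixel feature vectors (rows of `features`)."""
    rng = np.random.default_rng(seed)
    centers = features[rng.choice(len(features), size=k, replace=False)]
    labels = np.zeros(len(features), dtype=int)
    for _ in range(n_iter):
        dists = np.linalg.norm(features[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)                  # assign each vector to its nearest center
        for j in range(k):
            if np.any(labels == j):
                centers[j] = features[labels == j].mean(axis=0)
    return labels, centers

if __name__ == "__main__":
    rng = np.random.default_rng(5)
    # Stand-in features: two synthetic "textures" with different mean feature values.
    f1 = rng.normal(loc=0.2, scale=0.05, size=(500, 3))
    f2 = rng.normal(loc=0.8, scale=0.05, size=(500, 3))
    feats = np.vstack([f1, f2])
    labels, _ = kmeans(feats, k=2)
    print("cluster sizes:", np.bincount(labels))
```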
158

Saliency driven lossy image compression using machine learning

Fällman, Sebastian January 2023 (has links)
Since the introduction of digital media to consumers in the 1990s, the number of images used in everyday life has grown exponentially. All these images need to be stored on and delivered from some storage device, which has been made possible by image compression algorithms such as JPG. This thesis explores the feasibility of using machine learning together with saliency maps to compress images. Two machine learning models were developed: one combined the corresponding saliency map with the image as a preprocessing step, while the other used a saliency-driven loss function, which resulted in much better overall performance. The saliency-driven loss function outperformed the JPG standard at extreme compression rates; however, JPG performed better at lower, higher-quality compression rates.
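A minimal sketch of what a saliency-driven loss can look like is shown below: a per-pixel MSE weighted by the saliency map, so that errors in salient regions are penalized more. The weighting formula and the parameter alpha are assumptions for illustration; the thesis's actual loss function may differ.

```python
import numpy as np

def saliency_weighted_mse(original, reconstructed, saliency, alpha=4.0):
    """MSE where each pixel's error is weighted by its (normalized) saliency.
    `alpha` controls how strongly salient regions dominate the loss."""
    s = saliency / (saliency.max() + 1e-8)            # normalize saliency map to [0, 1]
    weights = 1.0 + alpha * s                         # non-salient pixels still count a little
    return np.sum(weights * (original - reconstructed) ** 2) / np.sum(weights)

if __name__ == "__main__":
    rng = np.random.default_rng(6)
    img = rng.random((64, 64))
    noisy = img + rng.normal(scale=0.05, size=img.shape)
    sal = np.zeros_like(img)
    sal[16:48, 16:48] = 1.0                           # hypothetical salient center region
    print("plain MSE:   ", round(float(np.mean((img - noisy) ** 2)), 5))
    print("weighted MSE:", round(float(saliency_weighted_mse(img, noisy, sal)), 5))
```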
159

Compression of Large-Scale Aerial Imagery: Exploring Set Redundancy Methods

Lüdeking, Solvej January 2023 (has links)
Compression of data has always been important: more data is being produced and therefore has to be stored. Even as hardware technology advances, compression remains necessary to reduce the storage occupied and to keep data in transmission as small as possible. Set redundancy was introduced in 1996 but has since received little attention in research. This thesis implements two set redundancy methods, the Max-Min-Predictive II and the Intensity Mapping algorithm, to see whether the approach can be used on large-scale aerial imagery in the geodata field. After applying the set redundancy methods, different individual image compression methods were applied and compared to standard JPEG2000 in lossless mode; these compression algorithms were Huffman, LZW, and JPEG2000 itself. The data sets consisted of two pairs of images taken in 2019, one pair with 60% overlap and the other with 80% overlap. Individual compression of the images still offers a better compression ratio, but the set redundancy methods produce results worth investigating further with more images in a set of similar images. This points to future work on compressing larger sets with more overlap and more images, which, for better matching, should be overlaid more carefully to ensure matching pixel values.
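To make the set redundancy idea concrete, the sketch below shows one simplified reading of a min-max predictive step: the per-pixel MIN and MAX images of the set are kept, and each image is reduced to a small residual that compresses well. The predictor chosen here (the midpoint of MIN and MAX) is an assumption for illustration; the Max-Min-Predictive II and Intensity Mapping algorithms evaluated in the thesis differ in detail.

```python
import numpy as np

def set_residuals(image_set):
    """Simplified min-max predictive step over a set of similar images:
    store the per-pixel MIN and MAX images, then keep only each image's
    residual from the average of MIN and MAX (one possible predictor)."""
    stack = np.stack(image_set).astype(np.int32)
    img_min = stack.min(axis=0)
    img_max = stack.max(axis=0)
    predictor = (img_min + img_max) // 2
    residuals = stack - predictor                     # small values if the set is redundant
    return img_min, img_max, residuals

if __name__ == "__main__":
    rng = np.random.default_rng(7)
    base = rng.integers(0, 256, size=(128, 128))
    # Two "overlapping aerial images": the same scene plus small independent noise.
    imgs = [np.clip(base + rng.integers(-3, 4, size=base.shape), 0, 255) for _ in range(2)]
    mn, mx, res = set_residuals(imgs)
    print("original value spread :", int(np.ptp(np.stack(imgs))))
    print("residual value spread :", int(np.ptp(res)))   # much smaller, so easier to compress
```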
160

Development of an Imager System Optimized for Low-Power, Limited-Bandwidth Space Applications

Glassey, Kalia R 01 April 2009 (has links) (PDF)
A relatively new picosatellite standard, CubeSats have traditionally been used for simple educational missions. As CubeSats become more complex and employ more sophisticated sensors such as imagers, they gain enhanced credibility as satellite platforms. Imaging systems on CubeSats can serve a variety of purposes, such as Earth and weather monitoring, attitude determination, and remote sensing. However, the size and power limitations of CubeSats pose an interesting challenge to the design of a capable, robust imaging system. This thesis outlines the objectives and requirements of CP-3's imaging system and describes the development process and methods. Test results from the imaging system are included, along with lessons learned from CP-3's on-orbit operations. This document can serve as a guideline for other teams wishing to develop imaging systems: while other developers may have different requirements or constraints, this roadmap illustrates each of the many considerations that must be taken into account when designing an imaging system.
