21

Analysis Of Koch Fractal Antennas

Irgin, Umit 01 June 2009 (has links) (PDF)
A fractal is a recursively generated object describing a family of complex shapes that possess an inherent self-similarity in their geometrical structure. When used in antenna engineering, fractal geometries provide multi-band characteristics and lower resonance frequencies by enhancing the space-filling property. Moreover, fractal arrays can be used to control side-lobe levels and radiation patterns. In this thesis, the performance of the Koch curve as an antenna is investigated. Since fractals are complex shapes, there is no well-established mathematical formulation for obtaining the radiation properties and frequency response of Koch curve antennas directly. Koch curve antennas became popular because they exhibit a better frequency response than their Euclidean counterparts. The effect of the parameters of the Koch geometry on antenna performance is studied in this thesis. Moreover, modified Koch geometries are generated to obtain the relation between fractal properties and the antenna's radiation and frequency characteristics.
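To make the recursive construction concrete, the sketch below (not taken from the thesis) generates the vertices of a Koch curve of a given iteration depth in Python; the `angle_deg` parameter is only a generic illustration of how the base geometry can be varied, not a parameter defined in the abstract.

```python
import math

def koch_curve(p0, p1, depth, angle_deg=60.0):
    """Recursively replace each segment by four segments forming a bump;
    60 degrees gives the classic Koch curve, other angles give variants."""
    if depth == 0:
        return [p0, p1]
    (x0, y0), (x1, y1) = p0, p1
    dx, dy = (x1 - x0) / 3.0, (y1 - y0) / 3.0
    a = (x0 + dx, y0 + dy)                # one-third point
    b = (x0 + 2.0 * dx, y0 + 2.0 * dy)    # two-thirds point
    t = math.radians(angle_deg)
    tip = (a[0] + dx * math.cos(t) - dy * math.sin(t),
           a[1] + dx * math.sin(t) + dy * math.cos(t))
    points = []
    for s, e in ((p0, a), (a, tip), (tip, b), (b, p1)):
        points.extend(koch_curve(s, e, depth - 1, angle_deg)[:-1])
    points.append(p1)
    return points

vertices = koch_curve((0.0, 0.0), (1.0, 0.0), depth=3)
print(len(vertices))  # 4**3 segments -> 4**3 + 1 = 65 vertices
```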
22

Exploring web data on mobile devices: an approach based on information visualization

Felipe Simões Lage Gomes Duarte 12 February 2015 (has links)
With the development of computers and the increasing popularity of the Internet, society has entered the information age. This era is marked by the way we produce and deal with information. Every day, thousands of gigabytes of data are produced and stored, but their value is reduced if they cannot be transformed into knowledge. At the same time, computing is moving towards miniaturization and affordability with mobile devices, which are changing user behavior: users are shifting from passive readers to content generators. In this context, this master's thesis proposes NMap, a data visualization technique, and SPCloud, a web data visualization tool for mobile devices. NMap uses all the available visual space to convey group information while preserving the distance-similarity metaphor. Comparative evaluations against state-of-the-art techniques show that NMap achieves better neighborhood preservation with significantly lower processing time, placing it among the leading space-filling techniques. SPCloud uses NMap to visualize news available on the web and was developed taking into account the inherent characteristics of mobile devices, which makes it usable on such equipment. Informal user tests showed that the tool performs well in summarizing large amounts of news in a small visual space.
23

A Graphics Processing Unit Based Discontinuous Galerkin Wave Equation Solver with hp-Adaptivity and Load Balancing

Tousignant, Guillaume 13 January 2023 (has links)
In computational fluid dynamics, we often need to solve complex problems with high precision and efficiency. We propose a three-pronged approach to attain this goal. First, we use the discontinuous Galerkin spectral element method (DG-SEM) for its high accuracy. Second, we use graphics processing units (GPUs) to perform our computations to exploit available parallel computing power. Third, we implement a parallel adaptive mesh refinement (AMR) algorithm to efficiently use our computing power where it is most needed. We present a GPU DG-SEM solver with AMR and dynamic load balancing for the 2D wave equation. The DG-SEM is a higher-order method that splits a domain into elements and represents the solution within these elements as a truncated series of orthogonal polynomials. This approach combines the geometric flexibility of finite-element methods with the exponential convergence of spectral methods. GPUs provide a massively parallel architecture, achieving a higher throughput than traditional CPUs. They are a relatively new platform in the scientific community, so most algorithms need to be adapted to this architecture. We perform most of our computations in parallel on multiple GPUs. AMR selectively refines elements in the domain where the error is estimated to be higher than a prescribed tolerance, via two mechanisms: p-refinement increases the polynomial order within elements, and h-refinement splits elements into several smaller ones. This provides higher accuracy in important flow regions and increases the capability to model complex flows, while saving computing power in other parts of the domain. We use the mortar element method to retain the exponential convergence of high-order methods at the non-conforming interfaces created by AMR. We implement a parallel dynamic load balancing algorithm to even out the load imbalance caused by solving problems in parallel over multiple GPUs with AMR, using a space-filling-curve-based repartitioning algorithm that ensures good locality and small interfaces. While the intense calculations of the high-order approach suit the GPU architecture, programming the highly dynamic adaptive algorithm on GPUs is the most challenging aspect of this work. The resulting solver is tested on up to 64 GPUs on HPC platforms, where it shows good strong and weak scaling characteristics. Several example problems of increasing complexity are solved, showing a reduction in computation time of up to 3× on GPUs versus CPUs, depending on the loading of the GPUs and other user-defined parameter choices. AMR is shown to improve computation times by an order of magnitude or more.
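The abstract does not spell out the repartitioning step, so the following Python sketch is only an illustration of the general idea of space-filling-curve repartitioning, with assumed names and a Morton (Z-order) key standing in for whichever curve the solver actually uses: elements are sorted by their curve index, then the sorted list is cut into contiguous chunks of roughly equal computational weight.

```python
def morton_key(ix, iy, bits=16):
    """Interleave the bits of integer grid coordinates into a Z-order key,
    so that elements close in space tend to get close keys."""
    key = 0
    for b in range(bits):
        key |= ((ix >> b) & 1) << (2 * b)
        key |= ((iy >> b) & 1) << (2 * b + 1)
    return key

def repartition(centers, weights, n_ranks):
    """Sort elements along the curve, then split the ordering into
    contiguous chunks of roughly equal total weight (e.g. degrees of
    freedom, which grow where p-refinement was applied)."""
    order = sorted(range(len(centers)), key=lambda i: morton_key(*centers[i]))
    target = sum(weights) / n_ranks
    parts = [[] for _ in range(n_ranks)]
    acc, rank = 0.0, 0
    for i in order:
        if acc >= target * (rank + 1) and rank < n_ranks - 1:
            rank += 1
        parts[rank].append(i)
        acc += weights[i]
    return parts

# Toy example: 16 elements on a 4x4 grid, heavier where the order was raised.
centers = [(ix, iy) for iy in range(4) for ix in range(4)]
weights = [4.0 if ix < 2 else 1.0 for (ix, iy) in centers]
print(repartition(centers, weights, n_ranks=4))
```

Keeping each chunk contiguous along the curve is what gives the good locality and small interfaces mentioned above.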
24

Fractal sets and dimensions

Leifsson, Patrik January 2006 (has links)
Fractal analysis is an important tool when we need to study geometrical objects that are less regular than ordinary ones, e.g. a set with a non-integer dimension value. It has developed intensively over the last 30 years, which hints at its young age as a branch of mathematics. In this thesis we take a look at some basic measure theory needed to introduce certain definitions of fractal dimensions, which can be used to measure a set's fractal degree. These definitions are compared and we investigate when they coincide. With these tools, different fractals are studied and compared. A key idea in this thesis has been to sum up different names and definitions referring to similar concepts.
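As a concrete illustration of one such definition (this code is not from the thesis), the box-counting dimension can be estimated numerically by counting how many grid cells of side eps contain points of the set and fitting the growth of that count as eps shrinks; applied to the middle-thirds Cantor set, the estimate comes out close to the exact value log 2 / log 3 ≈ 0.63.

```python
import math
import numpy as np

def cantor_points(depth):
    """Endpoints of the intervals left after `depth` middle-third removals
    (a finite stand-in for the limit set)."""
    intervals = [(0.0, 1.0)]
    for _ in range(depth):
        nxt = []
        for a, b in intervals:
            third = (b - a) / 3.0
            nxt += [(a, a + third), (b - third, b)]
        intervals = nxt
    return np.array([x for ab in intervals for x in ab])

def box_counting_dimension(points, scales):
    """Fit log N(eps) against log(1/eps), where N(eps) is the number of
    boxes of side eps containing at least one point of the set."""
    log_inv_eps, log_n = [], []
    for eps in scales:
        occupied = set(np.floor(points / eps).astype(int).tolist())
        log_inv_eps.append(math.log(1.0 / eps))
        log_n.append(math.log(len(occupied)))
    slope, _ = np.polyfit(log_inv_eps, log_n, 1)
    return slope

points = cantor_points(depth=10)
print(box_counting_dimension(points, [3.0 ** -k for k in range(2, 8)]))
# roughly 0.63, i.e. close to log 2 / log 3
```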
26

Contributions to quality improvement methodologies and computer experiments

Tan, Matthias H. Y. 16 September 2013 (has links)
This dissertation presents novel methodologies for five problem areas in modern quality improvement and computer experiments, namely selective assembly, robust design with computer experiments, multivariate quality control, model selection for split plot experiments, and construction of minimax designs. Selective assembly has traditionally been used to achieve tight specifications on the clearance of two mating parts. Chapter 1 proposes generalizations of the selective assembly method to assemblies with any number of components and any assembly response function, called generalized selective assembly (GSA). Two variants of GSA are considered: direct selective assembly (DSA) and fixed bin selective assembly (FBSA). In DSA and FBSA, the problem of matching a batch of N components of each type to give N assemblies that minimize quality cost is formulated as an axial multi-index assignment problem and a transportation problem, respectively. Realistic examples are given to show that GSA can significantly improve the quality of assemblies. Chapter 2 proposes methods for robust design optimization with time-consuming computer simulations. Gaussian process models are widely employed for modeling responses as a function of control and noise factors in computer experiments. In these experiments, robust design optimization is often based on the average quadratic loss computed as if the posterior mean were the true response function, which can give misleading results. We propose optimization criteria derived by taking the expectation of the average quadratic loss with respect to the posterior predictive process, and methods based on the Lugannani-Rice saddlepoint approximation for constructing accurate credible intervals for the average loss. These quantities allow response surface uncertainty to be taken into account in the optimization process. Chapter 3 proposes a Bayesian method for identifying mean shifts in multivariate normally distributed quality characteristics. Multivariate quality characteristics are often monitored using a few summary statistics. However, to determine the causes of an out-of-control signal, information about which means shifted and the directions of the shifts is often needed. We propose a Bayesian approach that gives this information. For each mean, an indicator variable that indicates whether the mean shifted upwards, shifted downwards, or remained unchanged is introduced. Default prior distributions are proposed. Mean shift identification is based on the modes of the posterior distributions of the indicators, which are determined via Gibbs sampling. Chapter 4 proposes a Bayesian method for model selection in fractionated split plot experiments. We employ a Bayesian hierarchical model that takes into account the split plot error structure. Expressions for computing the posterior model probability and other important posterior quantities, which require evaluation of at most two one-dimensional integrals, are derived. A novel algorithm called combined global and local search is proposed to find models with high posterior probabilities and to estimate posterior model probabilities. The proposed method is illustrated with the analysis of three real robust design experiments. Simulation studies demonstrate that the method has good performance. Finally, choosing a design that is representative of a finite candidate set is an important problem in computer experiments. The minimax criterion measures this representativeness: it is the maximum distance from any candidate point to the design, i.e. to its nearest design point. Chapter 5 proposes algorithms for finding minimax designs for finite design regions. We establish the relationship between minimax designs and the classical set covering location problem in operations research, which is a binary linear program. We prove that the set of minimax distances is the set of discontinuities of the function that maps the covering radius to the optimal objective function value, and that optimal solutions at the discontinuities are minimax designs. These results are employed to design efficient procedures for finding globally optimal minimax and near-minimax designs.
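To make the criterion concrete (a minimal numpy sketch, not the set-covering algorithm developed in Chapter 5), the minimax distance of a design over a finite candidate set can be computed directly as the largest nearest-design-point distance:

```python
import numpy as np

def minimax_distance(candidates, design):
    """Covering radius of the design: the largest distance from any
    candidate point to its nearest design point (smaller means the
    design is more representative of the candidate set)."""
    d = np.linalg.norm(candidates[:, None, :] - design[None, :, :], axis=-1)
    return d.min(axis=1).max()

rng = np.random.default_rng(0)
candidates = rng.random((200, 2))      # finite candidate set in [0, 1]^2
design = candidates[rng.choice(200, size=10, replace=False)]
print(minimax_distance(candidates, design))
```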
27

Range Searching Data Structures with Cache Locality

Hamilton, Christopher 17 March 2011 (has links)
This thesis focuses on range searching data structures, an elementary problem in computational geometry with research spanning decades. These problems often involve very large data sets. Processor speeds increase faster than memory speeds, so the gap between the rate at which CPUs can process data and the rate at which data can be retrieved keeps growing. To bridge this gap, various levels of cache are used. Since cache misses are costly, algorithms should be cache-friendly. The input-output (I/O) model was the first model for constructing cache-efficient algorithms, focusing on a two-level memory hierarchy. Algorithms for this model require manual tuning to determine optimal values for hardware-dependent parameters, and are only optimal at a single level of a memory hierarchy. Cache-oblivious (CO) algorithms are built without knowledge of the hierarchy, allowing them to be optimal across all levels at once. There exist strong theoretical and practical results for I/O-efficient range searching. Recently, the CO model has received attention, but range searching remains poorly understood. This thesis explores data structures for CO range counting and reporting. It presents the first space- and worst-case query-time-optimal approximate range counting structure for a family of related problems, along with associated O(N log N)-space query-optimal reporting structures. The approximate counting structure is the first of its kind in the internal memory, I/O and CO models. Researchers have been trying to create linear-space query-optimal CO reporting structures; this thesis shows that for a variety of problems, linear space is in fact impossible. Heuristics are also used for building cache-friendly algorithms. Space-filling curves are continuous functions mapping multi-dimensional sets into one-dimensional ones. They are used to build search structures in the hope that objects that were close in the original space remain close in the resulting ordering, so that queries incur fewer page swaps when traversing the structure. The Hilbert curve is notably good at this, but often imposes a space or time penalty. This thesis introduces compact Hilbert indices, which remove the inefficiency inherent in input point sets whose bounding boxes are smaller than their bounding hypercubes.
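For reference (a standard textbook construction, not the compact Hilbert indices introduced in the thesis), the classic mapping from a point on a 2^k x 2^k grid to its position along the Hilbert curve can be written in a few lines; the compact indices generalize this idea to point sets whose bounding boxes are not full hypercubes.

```python
def hilbert_index(order, x, y):
    """Map a point (x, y) on a 2**order x 2**order grid to its position
    along the Hilbert curve (the classic bitwise xy-to-d construction)."""
    n = 1 << order
    d = 0
    s = n >> 1
    while s > 0:
        rx = 1 if (x & s) else 0
        ry = 1 if (y & s) else 0
        d += s * s * ((3 * rx) ^ ry)
        # Rotate/reflect the quadrant so sub-curves keep the base orientation.
        if ry == 0:
            if rx == 1:
                x = n - 1 - x
                y = n - 1 - y
            x, y = y, x
        s >>= 1
    return d

# Nearby grid points receive nearby curve positions (good cache locality).
print([hilbert_index(2, x, y) for x, y in [(0, 0), (1, 0), (1, 1), (0, 1)]])
# -> [0, 1, 2, 3]
```

Sorting points by such an index is the usual way of exploiting the curve's locality when laying out a search structure in memory or on disk.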
28

Image Structures For Steganalysis And Encryption

Suresh, V 04 1900 (has links) (PDF)
In this work we study two aspects of image security: improper usage and illegal access of images. In the first part we present our results on steganalysis, i.e. protection against improper usage of images. In the second part we present our results on image encryption, i.e. protection against illegal access of images. Steganography is the collective name for methodologies that allow the creation of invisible (hence secret) channels for information transfer. Steganalysis, the counter to steganography, is a collection of approaches that attempt to detect and quantify the presence of hidden messages in cover media. First we present our studies on stego-images using features developed for data stream classification, towards making some qualitative assessments about the effect of steganography on the lower-order bit planes (LSBs) of images. These features are effective in classifying different data streams. Using these features, we study the randomness properties of image and stego-image LSB streams and observe that data stream analysis techniques are inadequate for steganalysis purposes. This motivates steganalytic techniques that go beyond LSB properties, and we then present our steganalytic approach that takes such properties into account. In one such approach, we perform steganalysis by quantifying the effect of perturbations caused by mild image processing operations (zoom-in/out, rotation, distortions) on stego-images. We show that this approach works both in detecting and in estimating the presence of stego-contents for a particularly difficult steganographic technique known as LSB matching steganography. Next, we present our image encryption techniques. Encryption approaches used for text data are usually unsuited to encrypting images (and multimedia objects in general). The reasons are that, unlike text, the volume to be encrypted can be huge for images, leading to increased computational requirements, and that encryption designed for text renders images incompressible, resulting in poor use of bandwidth. These issues are overcome by designing image encryption approaches that obfuscate the image by intelligently re-ordering the pixels, or that encrypt only parts of a given image so as to render it imperceptible. The obfuscated or partially encrypted image is still amenable to compression. Efficient image encryption schemes ensure that the obfuscation is not compromised by the inherent correlations present in the image, and that the unencrypted portions of the image do not provide information about the encrypted parts. In this work we present two approaches for efficient image encryption. First, we utilize the correlation-preserving properties of Hilbert space-filling curves to reorder images in such a way that the image is obfuscated perceptually, without compromising the compressibility of the output image. We show experimentally that our approach leads to both perceptual security and perceptual encryption. We then show that the space-filling-curve-based approach also leads to more efficient partial encryption of images, wherein only the salient parts of the image are encrypted, thereby reducing the encryption load. In our second approach, we show that the Singular Value Decomposition (SVD) of images is useful for image encryption by way of mismatching the unitary matrices resulting from the decomposition of images; the images that result from the mismatching operations are seen to be perceptually secure.
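The abstract does not detail the mismatching scheme, so the following numpy sketch is only one plausible reading of the idea (an assumption, not the thesis's actual construction): two grayscale images are decomposed, each is rebuilt with the other's left unitary matrix, and the originals are recoverable only by whoever holds the matching matrices.

```python
import numpy as np

def svd_mismatch(img_a, img_b):
    """Rebuild each image with the other image's left unitary matrix.
    Without the matching U, the reconstruction is perceptually scrambled."""
    Ua, Sa, Vta = np.linalg.svd(img_a, full_matrices=False)
    Ub, Sb, Vtb = np.linalg.svd(img_b, full_matrices=False)
    scrambled_a = Ub @ np.diag(Sa) @ Vta   # A's singular values and right vectors, B's basis
    scrambled_b = Ua @ np.diag(Sb) @ Vtb
    return scrambled_a, scrambled_b, (Ua, Ub)

rng = np.random.default_rng(1)
a, b = rng.random((64, 64)), rng.random((64, 64))   # stand-ins for grayscale images
sa, sb, (Ua, Ub) = svd_mismatch(a, b)
# Holding the matching unitary matrices undoes the mismatch exactly.
print(np.allclose(Ua @ Ub.T @ sa, a), np.allclose(Ub @ Ua.T @ sb, b))
```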
29

Systèmes optiques interférentiels et incertitudes / Interferential optical systems and uncertainties

Vasseur, Olivier 07 September 2012 (has links) (PDF)
Technological progress now makes it possible to build interferential optical systems composed of a large number of components. For instance, multilayer dielectric filter designs comprising tens or hundreds of thin films have been proposed, and the coherent combination of tens to hundreds of fibered laser sources is the subject of extensive research. Other systems, such as two- and three-dimensional diffractive arrays made up of a large number of apertures, can be studied as well. Assessing the robustness of such interferential systems to manufacturing uncertainties is an important challenge, and it becomes all the more difficult as the number of parameters describing the system grows. This overview document first recalls the methodologies related to numerical designs of experiments and the results concerning their ability to explore high-dimensional spaces by means of a graph construction: the Minimum Spanning Tree. In a second part, the analysis of the influence of the uncertainties in the input parameters of interferential systems on their performance is illustrated with two applications: multidielectric interference filters and the coherent combination of fibered laser sources. The methodology makes it possible, in particular, to identify the most critical uncertainties and interactions within the system while building representative metamodels. Building on these results, the spatial characterization of the speckle of rough surfaces and, more generally, the characterization of the spatial variability of optical phenomena are then detailed. Finally, the scientific perspectives arising from this body of research are developed. (P.S.: the slides presented at the defense have been added as an appendix to the original document, pages 170 to 202.)
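As an illustration of the Minimum Spanning Tree idea mentioned above (a minimal sketch with assumed settings, not the document's own implementation), the tree can be built over the points of a numerical design of experiments and its edge-length statistics used to judge how well the design explores the space; one common reading is that well-spread designs give a larger mean edge length with a smaller spread than clustered ones.

```python
import numpy as np
from scipy.spatial import distance_matrix
from scipy.sparse.csgraph import minimum_spanning_tree

def mst_edge_stats(points):
    """Mean and standard deviation of the edge lengths of the design's
    minimum spanning tree, used here as a rough space-filling indicator."""
    mst = minimum_spanning_tree(distance_matrix(points, points))
    edges = mst.data                     # the n-1 kept edge lengths
    return edges.mean(), edges.std()

rng = np.random.default_rng(0)
uniform_design = rng.random((50, 10))              # 50 points in dimension 10
clustered_design = rng.normal(0.5, 0.05, (50, 10))
print(mst_edge_stats(uniform_design))
print(mst_edge_stats(clustered_design))            # noticeably smaller mean edge length
```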
