11

Simulation of micro-lens array for LED lighting

Lin, Ming-Cheng 30 January 2008 (has links)
Liquid-crystal displays (LCDs) currently account for the highest output value and the most widespread use among flat panel displays for monitors. Their advantages are low power consumption, light weight, and a thin profile, and they have largely replaced traditional monitors. The backlight module, the color filter, and the driver IC are the three major cost components of an LCD. Optical films are the most numerous components in the backlight module; they are used to increase brightness, illuminate the panel evenly, and improve luminous efficiency. Because the liquid crystal itself does not emit light, the backlight module is an indispensable part of the display, especially as panel sizes grow beyond 60 inches. As the panel size increases, the cost of the backlight module rises correspondingly and can reach 50% of the total, which underlines its importance in an LCD. This study develops an innovative gapless compound polygonal micro-lens. The manufacturing process includes lithography, reflow, electroforming, and hot embossing, and the TracePro software is applied to support the design. An LED is used as the light source in simulations without a diaphragm, allowing the optical film to reach an optimum condition. With conventional gapped optical films, 3~4 sheets are needed in the backlight module; with the gapless optical film, which greatly increases light efficiency, only 1~2 sheets are expected to be required. This reduces cost and improves the competitiveness of domestic manufacturers.
12

Design and Fabrication of A Diffuser Film with Two Layers of Microlens Arrays

Chen, Ming-Fa 29 July 2009 (has links)
A microlens array integrated on a transparent film, called an optical film, offers interesting applications in various fields. In a flat panel display (FPD), optical films are among the most important components for improving efficiency and image quality. In this dissertation, a diffuser film consisting of two different microlens arrays on the two surfaces of a film was developed and used to enhance the brightness and uniformity of a light source. Several microlens arrays were also developed, including hexagonal microlens arrays with and without gaps, a gapless dual-curvature microlens array, and the diffuser film itself. A process called the polygonal microlens array process was used to manufacture them; its advantages are mass production, a variety of polygonal shapes, and a 100% fill factor. A soft PDMS mold and a NiCo alloy metal mold were used to replicate the microlens arrays, and several replication processes were applied to determine which is most suitable for mass-producing the diffuser film. The results show that microlens arrays of different shapes and dimensions produce different light distributions. Therefore, to find a more suitable and novel layout, the Taguchi Method combined with simulation was used to design the layout of the diffuser film before fabrication. Finally, a diffuser film was measured to demonstrate its optical effects. The average intensity and the S/N ratios were obtained from both measurement and simulation, and the trends of the two agreed well.
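As a concrete illustration of the Taguchi analysis mentioned in the abstract above, the short sketch below computes a larger-the-better S/N ratio from repeated luminance readings of two candidate diffuser layouts. It is a generic illustration only; the sample values, and the choice of the larger-the-better formulation, are assumptions and not taken from the dissertation.

```python
import numpy as np

def taguchi_sn_larger_is_better(y):
    """Larger-the-better S/N ratio: -10 * log10(mean(1 / y^2)).

    y: positive responses, e.g. luminance readings (cd/m^2) collected
    for one diffuser layout over repeated trials.
    """
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(1.0 / y**2))

# Hypothetical luminance samples for two candidate layouts.
layout_a = [4800.0, 4950.0, 4700.0, 4900.0]
layout_b = [5200.0, 4400.0, 5600.0, 4300.0]

for name, samples in (("A", layout_a), ("B", layout_b)):
    print(f"layout {name}: mean = {np.mean(samples):.0f} cd/m^2, "
          f"S/N = {taguchi_sn_larger_is_better(samples):.2f} dB")
```

In a Taguchi design the layout with the larger S/N ratio would be preferred, since it combines a high mean response with low variation between trials.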
13

On Thin Shallow Elastic Shells Over Polygonal Bases

Walkinshaw, Douglas S. 10 1900 (has links)
This thesis proposes to demonstrate, by means of numerical examples, the applicability of the approximate solution for shallow, spherical, calotte shells enclosing polygonal bases for the purposes of practical design. The theoretical solution is based on a collocation procedure by means of which prescribed boundary conditions are satisfied at discrete boundary points, and is derived from the general theory of Mushtari and Vlasov, in which the transverse shear deformation of the shell is neglected in comparison with its transverse bending and extensional surface deformation. / Thesis / Master of Engineering (MEngr)
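For readers unfamiliar with the theory invoked above, the Mushtari–Vlasov (shallow shell) equations for a spherical cap of radius R, thickness h, and normal pressure q are commonly written as the coupled pair below, where w is the transverse deflection and Φ the stress function; sign conventions differ between texts, so this is a standard statement of the theory rather than the exact form used in the thesis.

```latex
\[
  D\,\nabla^{4} w \;-\; \frac{1}{R}\,\nabla^{2}\Phi \;=\; q,
  \qquad
  \frac{1}{Eh}\,\nabla^{4}\Phi \;+\; \frac{1}{R}\,\nabla^{2} w \;=\; 0,
  \qquad
  D = \frac{Eh^{3}}{12(1-\nu^{2})} .
\]
```

The transverse shear deformation is neglected here, as stated in the abstract; the collocation procedure approximates w and Φ and enforces the prescribed boundary conditions only at discrete points along the polygonal edge.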
14

Real-time Rendering of Burning Objects in Video Games

Amarasinghe, Dhanyu Eshaka 08 1900 (has links)
In recent years there has been growing interest in ever-greater realism in computer graphics applications. Within that area, my main focus is on complex physical simulation and modeling with applications for the gaming industry. Many simulations have succeeded by faithfully replicating the details of a physical process; some are convincing enough to immerse the user in believable virtual worlds without breaking the sense of presence. In this research, I focus on fire simulation and the deformation it causes in various virtual objects. In most game engines, model loading takes place at the beginning of the game or when the game transitions between levels, and game models are stored in large data structures. Because changing or adjusting a large data structure while the game is running may adversely affect performance, developers often avoid procedural simulations to save resources and prevent interruptions. I introduce a process that implements real-time model deformation while maintaining performance. It is a challenging task to achieve high-quality simulation while using minimal resources to represent multiple events in a timely manner; in video games especially, the method must be robust enough to sustain the player's willing suspension of disbelief. I have implemented and tested my method on a relatively modest GPU using CUDA. My experiments show that this method gives a believable visual effect while using only a small fraction of CPU and GPU resources.
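The key point above, leaving the large static model data untouched while the burn progresses, can be sketched as follows: instead of rewriting the mesh's data structure, a small per-vertex burn buffer is updated each frame and applied to the base vertices at render time. This is a generic NumPy analogue of that idea, not the thesis's CUDA implementation; the spherical burn front, its speed, and all other parameters are invented for illustration.

```python
import numpy as np

# Static mesh data loaded once at level start (never resized or rewritten).
base_vertices = np.random.rand(10_000, 3).astype(np.float32)            # placeholder mesh
normals = np.tile(np.array([0.0, 1.0, 0.0], np.float32), (10_000, 1))   # placeholder normals

# Small, mutable per-vertex state updated each frame instead of the mesh itself.
burn_amount = np.zeros(len(base_vertices), dtype=np.float32)

def update_burn(ignition_point, t, spread_speed=0.5, max_shrink=0.05):
    """Advance a spherical burn front and return deformed vertex positions.

    Vertices inside the front shrink along their normals; the base mesh
    stays untouched, so the expensive data structure is never rebuilt.
    """
    dist = np.linalg.norm(base_vertices - ignition_point, axis=1)
    inside = dist < spread_speed * t
    burn_amount[inside] = np.minimum(burn_amount[inside] + 0.01, 1.0)
    return base_vertices - normals * (burn_amount[:, None] * max_shrink)

# Per-frame usage: only the small displacement state changes between frames.
frame_vertices = update_burn(np.array([0.5, 0.5, 0.5], np.float32), t=1.2)
```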
15

Discrete Representation of Urban Areas through Simplification of Digital Elevation Data

Chittineni, Ruparani 10 May 2003 (has links)
In recent years there has been a large increase in the amount of digital mapping data of landscapes and urban environments available through satellite imaging. This digital information can be used to develop wind flow simulators over large cities or regions for purposes such as pollutant transport control, weather forecasting, cartography, and other topographical analysis. It can also be used by architects for city planning or by game programmers for virtual reality and similar applications. However, this data is massive and contains a lot of redundant information such as trees, cars, and bushes. For many applications, it is beneficial to reduce these huge amounts of data by eliminating unwanted information and providing a good approximate model of the original dataset. The resulting dataset can then be used to generate surface grids suitable for CFD purposes, or used directly for real-time rendering and other graphics applications. The Digital Elevation Model (DEM) is the most basic data type in which this digital data is available; it consists of a sampled array of elevations for ground positions that are regularly spaced in a Cartesian coordinate system. The purpose of this research is to construct and test a simple and economical prototype for image processing and data reduction of DEM images through noise elimination and compact representation of complex objects in the dataset. The model aims to balance the quality of the resulting image against its size by generating various levels of detail. An alternate approach based on the concept of standard deviation helps achieve this goal, and the results obtained by testing the model on the Salt Lake City dataset support these claims. This thesis is thus aimed at DEM image processing that provides a simple and compact representation of complex objects encountered in large-scale urban environment datasets and reduces the dataset size to allow efficient storage, computation, fast transmission across networks, and interactive visualization.
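One straightforward way to realize the standard-deviation approach described above is to tile the elevation grid into fixed-size blocks, measure the variation inside each block, and flatten blocks whose variation falls below a threshold while keeping the rest. The sketch below shows that scheme; the block size, the threshold, and the decision to flatten low-variance blocks to their mean are illustrative assumptions rather than the thesis's exact procedure.

```python
import numpy as np

def simplify_dem(elevation, block=8, std_threshold=0.5):
    """Return a simplified copy of a DEM grid.

    Blocks whose elevation standard deviation is below the threshold are
    treated as noise or flat ground and replaced by their mean elevation;
    blocks with larger variation (buildings, terrain features) are kept.
    """
    out = elevation.astype(float).copy()
    rows, cols = elevation.shape
    for r in range(0, rows - rows % block, block):
        for c in range(0, cols - cols % block, block):
            tile = out[r:r + block, c:c + block]
            if tile.std() < std_threshold:
                tile[:] = tile.mean()
    return out

# Synthetic example: gently noisy ground with one "building".
dem = np.random.normal(100.0, 0.2, size=(64, 64))
dem[20:30, 20:30] += 15.0
simplified = simplify_dem(dem)
print("unique values before:", np.unique(dem.round(2)).size,
      "after:", np.unique(simplified.round(2)).size)
```

Under this scheme, larger blocks and higher thresholds would remove more small features (cars, bushes) while preserving large structures, which is one way to generate the several levels of detail the abstract mentions.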
16

An algorithm to solve traveling-salesman problems in the presence of polygonal barriers

Gupta, Anil K. January 1985 (has links)
No description available.
17

Development of a Polygonal Finite Element Solver and Its Application to Fracture Problems

Kamble, Mithil 07 November 2017 (has links)
No description available.
18

The role of interactive visualizations in the advancement of mathematics

Alvarado, Alberto 29 November 2012 (has links)
This report explores the effect of interactive visualizations on the advancement of mathematical understanding. Not only do interactive visualizations help mathematicians expand the body of mathematical knowledge, they also give students an efficient way to process the information taught in schools. Many concepts in mathematics make use of interactive visualizations, and examples of such concepts are illustrated within this report. / text
19

Contribution of an ultra-fast, unsupervised segmentation algorithm to the design of segmentation techniques for noisy images

Liu, Siwei 16 December 2014 (has links)
Image segmentation is an important step in many image processing systems, and many problems remain unsolved. It has recently been shown that when an image is composed of two homogeneous regions, polygonal active contour techniques based on the minimization of a criterion derived from information theory can yield an ultra-fast algorithm that requires neither parameters to tune in the optimized criterion nor a priori knowledge of the gray-level fluctuations. This algorithm can then be used as a fast, unsupervised processing module. The objective of this thesis is to show how this ultra-fast, unsupervised algorithm can serve as a building block in the design of more complex segmentation techniques, allowing several limits to be overcome, in particular: to be robust to strong inhomogeneity in the image, which is often inherent in the acquisition process (non-uniform illumination, attenuation, etc.); to segment disconnected objects with a polygonal active contour without complicating the optimization strategy; and to segment multi-region images while estimating, in an unsupervised way, the number of homogeneous regions in the image. For each of these three problems, unsupervised segmentation techniques based on the optimization of Minimum Description Length criteria were obtained, which require neither parameter tuning by the user nor a priori information about the kind of noise in the image. Moreover, it was shown that fast segmentation techniques can be built from this module while keeping the implementation complexity low.
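To give a feel for the kind of information-theoretic criterion mentioned above, the sketch below scores a candidate two-region partition of an image with a simple stochastic-complexity term (each region coded with its own Gaussian model) plus a penalty proportional to the contour complexity. This is a toy illustration of the Minimum Description Length idea under assumed Gaussian statistics, not the thesis's actual criterion or optimization.

```python
import numpy as np

def two_region_mdl(image, mask, contour_length, penalty_per_vertex=1.0):
    """Toy MDL-style score for a binary partition of an image.

    Each region is coded with a Gaussian model of its own pixels
    (cost ~ n/2 * log(variance)); the polygonal contour adds a
    complexity penalty. Lower scores indicate better partitions.
    """
    score = 0.0
    for region in (image[mask], image[~mask]):
        n = region.size
        if n > 1:
            score += 0.5 * n * np.log(region.var() + 1e-12)
    return score + penalty_per_vertex * contour_length

# Synthetic two-region image: dark background, brighter noisy square.
rng = np.random.default_rng(0)
img = rng.normal(50, 5, size=(64, 64))
img[16:48, 16:48] = rng.normal(120, 5, size=(32, 32))

good_mask = np.zeros_like(img, dtype=bool); good_mask[16:48, 16:48] = True
bad_mask = np.zeros_like(img, dtype=bool);  bad_mask[:, :32] = True

print("correct partition:", two_region_mdl(img, good_mask, contour_length=4))
print("wrong partition  :", two_region_mdl(img, bad_mask, contour_length=4))
```

In the real method the polygonal contour itself is optimized to minimize such a criterion; the toy above only compares two fixed partitions to show why the correct one scores lower.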
20

Geometric modeling of man-made objects at different levels of detail

Fang, Hao 16 January 2019 (has links)
Geometric modeling of man-made objects from 3D data is one of the biggest challenges in computer vision and computer graphics. The long-term goal is to generate a CAD-style model in an as-automatic-as-possible way. To achieve this goal, difficult issues have to be addressed, including (i) the scalability of the modeling process with respect to massive input data, (ii) the robustness of the methodology to various defect-laden input measurements, and (iii) the geometric quality of the output models. Existing methods work well for recovering the surface of free-form objects; in the case of man-made objects, however, it is difficult to produce results that approach the quality of highly structured representations such as CAD models. In this thesis, we present a series of contributions to the field. First, we propose a classification method based on deep learning to distinguish objects in raw 3D point clouds. Second, we propose an algorithm to detect planar primitives in 3D data at different levels of abstraction. Finally, we propose a mechanism to assemble planar primitives into compact polygonal meshes. These contributions are complementary and can be used sequentially to reconstruct city models at various levels of detail from airborne 3D data. We illustrate the robustness, scalability, and efficiency of our methods on both laser and multi-view stereo data of scenes composed of man-made objects.
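As an illustration of the planar-primitive detection step in the pipeline above, the sketch below fits one dominant plane to a point cloud with a basic RANSAC loop; real pipelines iterate this (or use region growing) to extract many primitives at several abstraction levels. It is a generic sketch, not the thesis's algorithm, and the thresholds and iteration count are arbitrary assumptions.

```python
import numpy as np

def ransac_plane(points, n_iters=500, inlier_tol=0.05, rng=None):
    """Fit one plane (n, d) with n.p + d = 0 to a point cloud by RANSAC.

    Returns the plane parameters and the boolean inlier mask of the
    points supporting it.
    """
    rng = rng or np.random.default_rng()
    best_inliers = np.zeros(len(points), dtype=bool)
    best_plane = None
    for _ in range(n_iters):
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(n)
        if norm < 1e-9:          # degenerate (collinear) sample
            continue
        n /= norm
        d = -n.dot(p0)
        inliers = np.abs(points @ n + d) < inlier_tol
        if inliers.sum() > best_inliers.sum():
            best_inliers, best_plane = inliers, (n, d)
    return best_plane, best_inliers

# Synthetic roof-like planar patch plus clutter.
rng = np.random.default_rng(1)
roof = np.c_[rng.uniform(0, 10, (500, 2)), rng.normal(3.0, 0.02, 500)]
clutter = rng.uniform(0, 10, (200, 3))
plane, mask = ransac_plane(np.vstack([roof, clutter]), rng=rng)
print("plane normal:", np.round(plane[0], 2), "inliers:", mask.sum())
```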
