591

ARCHGRAF.2: a revision of ARCHGRAF, an architectural graphics program

Law, Gary Wayne January 2011
Typescript (photocopy). Digitized by Kansas Correctional Industries.
592

Computer Graphics: Conversion of Contour Line Definitions Into Polygonal Element Mosaics

Sederberg, Thomas W. 01 December 1977
There has been a disparity between the conventional method of describing topographic surfaces (i.e. contour line definition) and a format of surface description often used in continuous-line computer graphics (i.e. panel definition). The two differ enough that conversion from contours to panels is not a trivial problem. A computer program that performs such a conversion would greatly facilitate continuous tone display of topographical surfaces, or any other surface which is defined by contour lines. This problem has been addressed by Keppel and alluded to by Fuchs. Keppel's is a highly systematic approach in which he uses graph theory to find the panel arrangement which maximizes the volume enclosed by concave surfaces. Fuchs mentions an approach to the problem as part of an algorithm to reconstruct a surface from data retrieved from a laser scan sensor. This thesis elaborates on a general conversion system. Following a brief overview of computer graphics, a simple algorithm is described which extracts a panel definition from a pair of adjacent contour loops, subject to the restriction that the two loops are similarly sized and shaped, and are mutually centered. Next, a mapping procedure is described which greatly relaxes the above restrictions. It is also shown that the conversion from contours to panels is inherently ambiguous (to various degrees) and that occasionally the ambiguity is great enough to require user interaction to guide the conversion algorithm. An important complication addressed in this thesis is the problem of handling cases where one contour loop branches into two or more (or vice versa). Attention turns next to a contour line definition of the human brain, and the special problems encountered in preparing those data for continuous tone display. The final chapters explain the Fortran implementation, present an example problem, and show sample pictures of the brain parts.
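A minimal sketch of the simple case this abstract describes, stitching two adjacent contour loops into triangular panels, follows. It adds the extra assumption that both loops carry the same number of vertices, and the name stitch_loops is illustrative; this is not the thesis's Fortran implementation.

    # Pair up corresponding vertices on two similarly sized, mutually centered
    # contour loops and split each quad between them into two triangles.
    def stitch_loops(lower, upper):
        """lower, upper: equal-length lists of (x, y, z) points on adjacent
        contours. Returns a list of triangles, each a triple of points."""
        n = len(lower)
        panels = []
        for i in range(n):
            j = (i + 1) % n  # wrap around so the strip closes the loop
            panels.append((lower[i], lower[j], upper[i]))
            panels.append((lower[j], upper[j], upper[i]))
        return panels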
593

Capturing life-like dynamic geometry into cages

Savoye, Yann 19 December 2012
Reconstructing, synthesizing, analyzing and re-using dynamic shapes that are captured from the real world in motion is a recent and outstanding challenge. Nowadays, highly detailed animations of live-actor performances are increasingly easier to acquire, and 3D video has reached considerable attention in visual media production. In this thesis, we address the problem of extracting or acquiring and then reusing non-rigid parametrization for video-based animations. At first sight, a crucial challenge is to reproduce plausible boneless deformations while preserving the global and local captured properties of the surface with a limited number of controllable, flexible and reusable parameters. To solve this challenge, we directly rely on a skin-detached dimension reduction thanks to the well-known cage-based paradigm. Indeed, to the best of our knowledge, this dissertation opens the field of cage-based performance capture. First, we achieve Scalable Inverse Cage-based Modeling by transposing the inverse kinematics paradigm onto surfaces; to do this, we introduce a cage inversion process with user-specified screen-space constraints. Secondly, we convert non-rigid animated surfaces into a sequence of estimated optimal cage parameters via a process of Cage-based Animation Conversion. Building on this reskinning procedure, we also develop a well-formed Animation Cartoonization algorithm for multi-view data in terms of cage-based surface exaggeration and video-based appearance stylization. Thirdly, motivated by the relaxation of prior knowledge on the data, we propose a promising unsupervised approach to perform Iterative Cage-based Geometric Registration. This novel registration scheme deals with reconstructed target point clouds obtained from multi-view video recording, in conjunction with a static and wrinkled template mesh. Above all, we demonstrate the strength of cage-based subspaces in order to reparametrize highly non-rigid dynamic surfaces, without the need for secondary deformations. In addition, we state and discuss conclusions and several limitations of our cage-based strategies applied to life-like dynamic surfaces captured for vision-oriented applications. Finally, a variety of potential directions and open suggestions for further work are outlined.
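For orientation, the cage-based paradigm this abstract relies on reduces a dense surface to a handful of cage vertices: each surface vertex is a fixed weighted combination of the cage's vertices, so moving the few cage vertices moves the whole surface. The sketch below assumes precomputed coordinates (e.g., mean value coordinates) and is an illustration, not Savoye's implementation.

    import numpy as np

    def cage_deform(cage_vertices, weights):
        """cage_vertices: (c, 3) array of deformed cage positions.
        weights: (n, c) precomputed coordinates, one row per surface vertex,
        each row summing to 1. Returns the (n, 3) deformed surface."""
        return weights @ cage_vertices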
594

Removing Textured Artifacts from Digital Photos Using Spatial Frequency Filtering

Huang, Ben 01 January 2010
Virtually all image processing is now done with digital images. These images, captured with digital cameras, can be readily processed with various types of editing software to serve a multitude of personal and commercial purposes. But not all images are directly captured, and even those that are may not be of sufficiently high quality. Digital images are also acquired by scanning old paper pictures, and the result is often a digital image of poor quality. The textured finish on some old paper pictures was designed to help protect them from discoloration; after scanning, however, this texture appears as annoying textured noise in the digital image, severely degrading its visual definition on electronic screens. This kind of image noise is known in the literature as global periodic noise: a spurious, repetitive pattern that persists throughout the image. There does not appear to be any commercial graphics software with a tool to directly remove this global periodic noise; even Photoshop, widely considered the most powerful and authoritative graphics software, has no effective function for reducing textured noise. This thesis addresses the problem by proposing an alternative filter to those currently available. To achieve the best image quality in photographic editing, spatial frequency domain filtering is used instead of spatial domain filtering. In the frequency domain, the consistent periodicity of the textured noise leads to well-defined spikes in the transform of the noisy image. When the noise spikes are at a sufficient distance from the image spectrum, they can be removed by reducing their frequency amplitudes, and the filtered spectrum then yields a noise-reduced image through an inverse frequency transform. This thesis proposes a method to reduce periodic noise in the spatial frequency domain; summarizes the differences between the DFT and the DCT, and between the FFT and the fast DCT, in image processing applications; uses the fast DCT as the frequency transform in order to improve both computational load and filtered image quality; and develops software that can be implemented as a plug-in for larger graphics packages to remove textured artifacts from digital images.
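The spike-removal step this abstract describes can be sketched with an FFT-based Gaussian notch filter; the thesis itself uses a fast DCT, and the spike positions here are assumed to have been located beforehand (e.g., as local maxima of the magnitude spectrum).

    import numpy as np

    def remove_periodic_noise(image, spikes, radius=3.0):
        """image: 2-D grayscale array. spikes: (row, col) spike positions in
        the centered spectrum; include both members of each symmetric pair.
        Returns the noise-reduced image."""
        spectrum = np.fft.fftshift(np.fft.fft2(image))
        rows, cols = np.indices(image.shape)
        for r, c in spikes:
            d2 = (rows - r) ** 2 + (cols - c) ** 2
            # Gaussian notch: pull the spectrum towards zero near the spike
            spectrum *= 1.0 - np.exp(-d2 / (2.0 * radius ** 2))
        return np.real(np.fft.ifft2(np.fft.ifftshift(spectrum)))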
595

Abstraction and representation of fields and their applications in biomedical modelling

Tsafnat, Guy, Computer Science & Engineering, Faculty of Engineering, UNSW January 2006
Computer models are used extensively to investigate biological systems. Many of these systems can be described in terms of fields: spatially and temporally varying scalar, vector and tensor properties defined over domains. For example, the spatial variation of muscle fibers is a vector field, the spatial and temporal variation in temperature of an organ is a scalar field, and the distribution of stress across muscle tissue is a tensor field. In this thesis I present my research on how to represent fields in a format that allows researchers to store and distribute them independently of models, and to investigate and manipulate them intuitively. I also demonstrate how the work can be applied to solving and analysing biomedical models. To represent fields I created a two-layer system. One layer, called the Field Representation Language (FRL), represents fields by storing numeric, analytic and meta data for storage and distribution; the focus of this layer is efficiency rather than usability. The second layer, called the Abstract Field Layer (AFL), provides an abstraction of fields so that they are easier for researchers to work with. This layer also provides common operations for manipulating fields as well as transparent conversion to and from FRL representations. The applications that I used to demonstrate the use of AFL and FRL are (a) a field visualisation toolkit, (b) integration of models from different scales and solvers, and (c) a solver that uses AFL internally. The layered architecture facilitated the development of tools that use fields. A similar architecture may also prove useful for representations of other modelled entities.
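A toy illustration of the two-layer idea, with a storage-oriented record standing in for FRL and a wrapper carrying field operations standing in for AFL, is sketched below. All names, the 1-D regular grid and the nearest-neighbour lookup are assumptions for illustration, not the thesis's actual API.

    import numpy as np

    class FieldRecord:
        """Storage layer (FRL-like): numeric samples plus metadata."""
        def __init__(self, samples, origin, spacing, meta=None):
            self.samples = np.asarray(samples)  # values on a regular 1-D grid
            self.origin, self.spacing = origin, spacing
            self.meta = meta or {}

    class Field:
        """Abstraction layer (AFL-like): operations over a FieldRecord."""
        def __init__(self, record):
            self.record = record
        def value_at(self, x):
            # nearest-neighbour lookup; a real layer would interpolate
            i = int(round((x - self.record.origin) / self.record.spacing))
            return self.record.samples[i]
        def __add__(self, other):
            # example of a common field operation on matching grids
            return Field(FieldRecord(self.record.samples + other.record.samples,
                                     self.record.origin, self.record.spacing))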
596

Interactive 3D modelling in outdoor augmented reality worlds

Piekarski, Wayne January 2004
This dissertation presents interaction techniques for 3D modelling of large structures in outdoor augmented reality environments. Augmented reality is the process of registering projected computer-generated images over a user's view of the physical world. With the use of a mobile computer, augmented reality can also be experienced in an outdoor environment. Working in a mobile outdoor environment introduces new challenges not previously encountered indoors, requiring the development of new user interfaces for interacting with the computer. Current AR systems support only limited interactions, so the complexity of the applications that can be developed is also limited. This dissertation describes a number of novel contributions that improve the state of the art in augmented reality technology. Thesis (PhD), University of South Australia, 2004.
597

Vortex: deferred sort-last parallel graphics architecture

Santilli, Abram, University of Western Sydney, College of Health and Science, School of Computing and Mathematics January 2006
We have developed a new cluster parallel graphics architecture that improves upon prior cluster parallel graphics systems for high-performance supergraphics: the Vortex deferred sort-last parallel graphics architecture, based on a global Z-space paradigm. The new architecture bypasses the limitations of screen-space parallelization paradigms and solves known Z-space parallelization inefficiencies and problems. Vortex addresses the lack of global Z-buffer awareness between GPUs and prevents artifacts in globally order-dependent blending across multiple GPUs. The new paradigm allows full one-to-one process-GPU coupling with minimal interprocess and inter-GPU communication, which permits maximal input bandwidth, maximal GPU utilization, near-optimal load balance, and improved efficiency when scaled to larger configurations. The Vortex architecture introduces a new deferred sort-blend approach for preventing visual artifacts in globally order-dependent fragment blends: all blend fragments are buffered in an external sort-blend subsystem until the end of rendering, when they are Z-culled, sorted and blended into the final frame. The new approach allows efficient, automatic order-independent blending and produces frames without any global blending artifacts. The new architecture gives us the ability to fully harness the processing power of state-of-the-art GPUs, and at the same time it offers programmers a much easier parallelization paradigm than existing screen-space CPGS parallelization paradigms. Doctor of Philosophy (PhD).
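The sort-last core that Vortex builds on can be sketched as a global Z comparison across per-GPU partial frames; the deferred sort-blend buffering the thesis adds for order-dependent blending is not shown, and the function below is an illustration rather than the architecture itself.

    import numpy as np

    def z_composite(colors, depths):
        """colors: list of (h, w, 3) partial frames, one per GPU.
        depths: matching list of (h, w) Z buffers. Returns the merged frame."""
        out_color, out_depth = colors[0].copy(), depths[0].copy()
        for color, depth in zip(colors[1:], depths[1:]):
            nearer = depth < out_depth      # a fragment wins where it is closer
            out_color[nearer] = color[nearer]
            out_depth[nearer] = depth[nearer]
        return out_color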
598

Filtering, clustering and dynamic layout for graph visualization

Huang, Xiaodi, xhuang@turing.une.edu.au January 2004
Graph visualization plays an increasingly important role in software engineering and information systems. Examples include UML, E-R diagrams, database structures, visual programming, web visualization, network protocols, molecular structures, genome diagrams, and social structures. Many classical algorithms for graph visualization have been developed over the past decades. In practice, however, these algorithms face difficulties such as overlapping nodes, large graph layouts, and dynamic graph layout. In order to solve these problems, this research systematically addresses both the algorithmic and approach issues related to a novel framework that describes the process of graph visualization applications; at the same time, all the proposed algorithms and approaches can be applied to other situations as well. First of all, a framework for graph visualization is described, along with a generic approach to the graphical representation of a relational information source. As important parts of this framework, two main approaches, filtering and clustering, are then investigated in particular to deal with large graph layouts effectively. In order to filter 'noise' or less important nodes in a given graph, two new methods are proposed to compute importance scores of nodes, called NodeRank, and then to control the appearance of nodes in a layout by ranking them. Two novel algorithms for clustering graphs, KNN and SKM, are developed to reduce visual complexity. Identifying seed nodes as initial members of clusters, both algorithms make use of either the k-nearest-neighbour search or a novel node similarity matrix to seek groups of nodes with the most affinities or similarities among them. Such groups of relatively highly connected nodes are then replaced with abstract nodes to form a coarse graph with reduced dimensions. An approach to the layout of clustered graphs, called MMD, is provided using a multiple-window, multiple-level display. As for dynamic graph layout, a new approach to removing overlapping nodes, called the Force-Transfer algorithm, is developed to greatly improve on the classical Force-Scan algorithm. Demonstrating the performance of the proposed algorithms and approaches, the framework has been implemented in a prototype called PGD, and a number of experiments as well as a case study have been carried out.
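The filtering step can be sketched as scoring every node and keeping only the top fraction; plain degree centrality stands in for NodeRank below, since the abstract does not specify the actual computation, and the function name is illustrative.

    def filter_graph(adjacency, keep_fraction=0.5):
        """adjacency: dict mapping each node to its set of neighbours.
        Returns the subgraph induced by the highest-scoring nodes."""
        scores = {v: len(nbrs) for v, nbrs in adjacency.items()}  # degree as score
        ranked = sorted(scores, key=scores.get, reverse=True)
        kept = set(ranked[:max(1, int(len(ranked) * keep_fraction))])
        return {v: adjacency[v] & kept for v in kept}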
599

Space subdivision and distributed databases in a multiprocessor raytracer

Cooper, C., n/a January 1991
This thesis deals with computer-generated images. It begins with an overview of a generalised computer graphics system, including a brief survey of typical methods for generating photorealistic images. One such technique, ray tracing, is used as the basis for the work which follows. The overview section concludes with a statement of the aim, which is to investigate the effective use of available processing power and the effective utilisation of available memory by implementing a ray tracing programme which uses space subdivision, multiple processors and a distributed world model database. The problem formulation section describes the ray tracing principle and then introduces the main areas of study. The INMOS Transputer (a building block for concurrent systems) is used to implement the multiple-process ray tracer. Space subdivision is achieved by repeated and regular subdivision of a world cube (which contains the scene to be ray traced) into named cubes, called octrees. The subdivision algorithm continues to subdivide space until no octree contains more than a specified number of objects, or until the practical limit of space subdivision is reached. The objects in the world model database are distributed in a round-robin manner to the ray trace processes; during execution of the ray trace programme, information about each object is passed between processes by a message mechanism. The concurrent code for the transputer processes, written in OCCAM 2, was developed using timing diagrams and signal flow diagrams derived by analogy from digital electronics. Structure diagrams, modified to be consistent with OCCAM 2 processes, were derived from the timing and signal flow diagrams and used as a basis for the coding. The results show that space subdivision makes effective use of processor power because the number of trial intersections of rays with objects is dramatically reduced. In addition, distributing the world model database avoids duplicating it in the memory of each process, and hence achieves better utilisation of available memory. The programmes are supported by a menu-driven interface (running on a PC AT) which enables the user to control the ray trace processes running on the transputer board housed in the PC.
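The subdivision rule described above, splitting until no cube holds more than a specified number of objects or a depth limit is hit, can be sketched as follows. Axis-aligned bounding boxes stand in for full object-cube intersection tests, and the names are illustrative rather than taken from the thesis.

    def boxes_overlap(a, b):
        """a, b: axis-aligned boxes as ((x0, y0, z0), (x1, y1, z1))."""
        return all(a[0][i] <= b[1][i] and b[0][i] <= a[1][i] for i in range(3))

    def build_octree(bounds, objects, max_objects=8, depth=0, max_depth=8):
        """Recursively split `bounds` into eight octants until no leaf holds
        more than max_objects objects (or max_depth is reached)."""
        inside = [o for o in objects if boxes_overlap(bounds, o)]
        if len(inside) <= max_objects or depth == max_depth:
            return {"bounds": bounds, "objects": inside}
        (x0, y0, z0), (x1, y1, z1) = bounds
        mx, my, mz = (x0 + x1) / 2, (y0 + y1) / 2, (z0 + z1) / 2
        children = [build_octree(((cx0, cy0, cz0), (cx1, cy1, cz1)),
                                 inside, max_objects, depth + 1, max_depth)
                    for cx0, cx1 in ((x0, mx), (mx, x1))
                    for cy0, cy1 in ((y0, my), (my, y1))
                    for cz0, cz1 in ((z0, mz), (mz, z1))]
        return {"bounds": bounds, "children": children}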
600

Reconstructing 3D geometry from multiple images via inverse rendering

Bastian, John William January 2008
An image is a two-dimensional representation of the three-dimensional world. Recovering the information which is lost in the process of image formation is one of the fundamental problems in computer vision. One approach to this problem involves generating and evaluating a succession of surface hypotheses, with the best hypothesis selected as the final estimate. The fitness of each hypothesis can be evaluated by comparing the reference images against synthetic images of the hypothesised surface rendered with the reference cameras. An infinite number of surfaces can recreate any set of reference images, so many approaches to the reconstruction problem recover the largest surface from this set. In contrast, the approach we present here accommodates prior structural information about the scene, thereby reducing ambiguity and finding a reconstruction which reflects the requirements of the user. The user describes structural information by defining a set of primitives and relating them by parameterised transformations. The reconstruction problem then becomes one of estimating the parameter values that transform the primitives such that the hypothesised surface best recreates the reference images. Two appearance-based likelihoods which measure the hypothesised surface against the reference images are described. The first likelihood compares each reference image against an image synthesised from the same viewpoint by rendering a projection of a second image onto the surface. The second likelihood finds the 'optimal' surface texture given the hypothesised scene configuration; not only does this process maximise photo-consistency with respect to all reference images, but it prohibits incorrect reconstructions by allowing the use of prior information about occlusion. The second likelihood is able to reconstruct scenes in cases where the first is biased. Thesis (Ph.D.), University of Adelaide, School of Computer Science, 2008.
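The generate-and-test loop this abstract outlines can be sketched as scoring each candidate parameter vector by photo-consistency. Here `render` and the primitives' `transformed` method are stand-ins for whatever renderer and primitive types are in use, not the thesis's implementation.

    import numpy as np

    def photo_consistency(params, primitives, cameras, references, render):
        """Sum of squared pixel differences between renders of the
        hypothesised surface and the reference images; lower is better."""
        surface = [p.transformed(params) for p in primitives]
        error = 0.0
        for cam, ref in zip(cameras, references):
            synth = render(surface, cam)   # synthetic view from the same camera
            error += float(np.sum((synth - ref) ** 2))
        return error

    # Usage: best = min(candidates, key=lambda q: photo_consistency(
    #     q, primitives, cameras, references, render))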
