  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
161

Uitgebreide struktuurgrafiekgrammatikas (Extended structure graph grammars)

Barnard, Andries 20 November 2014 (has links)
M.Sc. (Computer Science) / Please refer to full text to view abstract
162

Représentation et compression à haut niveau sémantique d’images 3D / Representation and compression at high semantic level of 3D images

Samrouth, Khouloud 19 December 2014 (has links)
La diffusion de données multimédia, et particulièrement les images, continue de croître de manière très significative. La recherche de schémas de codage efficaces des images reste donc un domaine de recherche très dynamique. Aujourd'hui, une des technologies innovantes les plus marquantes dans ce secteur est sans doute le passage à un affichage 3D. La technologie 3D est largement utilisée dans les domaines du divertissement, de l'imagerie médicale, de l'éducation et, plus récemment, des enquêtes criminelles. Il existe différentes manières de représenter l'information 3D. L'une des plus répandues consiste à associer à une image classique dite de texture une image de profondeur de champ. Cette représentation conjointe permet ainsi une bonne reconstruction 3D dès lors que les deux images sont bien corrélées, et plus particulièrement sur les zones de contours de l'image de profondeur. En comparaison avec des images 2D classiques, la connaissance de la profondeur de champ pour les images 3D apporte donc une information sémantique importante quant à la composition de la scène. Dans cette thèse, nous proposons un schéma de codage scalable d'images 3D de type 2D + profondeur avec des fonctionnalités avancées, qui préserve toute la sémantique présente dans les images, tout en garantissant une efficacité de codage significative. La notion de préservation de la sémantique peut être traduite en termes de fonctionnalités telles que l'extraction automatique de zones d'intérêt, la capacité de coder plus finement des zones d'intérêt par rapport au fond, la recomposition de la scène et l'indexation. Ainsi, dans un premier temps, nous introduisons un schéma de codage scalable et joint texture/profondeur. La texture est codée conjointement avec la profondeur à basse résolution, et une méthode de compression de la profondeur adaptée aux caractéristiques des cartes de profondeur est proposée.
Ensuite, nous présentons un schéma global de représentation fine et de codage basé contenu. Nous proposons ainsi un schéma global de représentation et de codage de "Profondeur d'Intérêt", appelé "Autofocus 3D". Il consiste à extraire finement des objets en respectant les contours dans la carte de profondeur, et à se focaliser automatiquement sur une zone de profondeur pour une meilleure qualité de synthèse. Enfin, nous proposons un algorithme de segmentation en régions d'images 3D, fournissant une forte cohérence entre la couleur, la profondeur et les régions de la scène. Basé sur une exploitation conjointe de l'information couleur et de celle de profondeur, cet algorithme permet la segmentation de la scène avec un degré de granularité fonction de l'application visée. Basé sur cette représentation en régions, il est possible d'appliquer simplement le même principe d'Autofocus 3D précédent, pour une extraction et un codage de la profondeur d'intérêt (DoI). L'élément le plus remarquable de ces deux approches est d'assurer une pleine cohérence spatiale entre texture, profondeur et régions, se traduisant par une minimisation des problèmes de distorsions au niveau des contours et ainsi par une meilleure qualité dans les vues synthétisées. / Dissemination of multimedia data, in particular images, continues to grow very significantly. Therefore, developing effective image coding schemes remains a very active research area. Today, one of the most innovative technologies in this area is 3D technology, which is widely used in many domains such as entertainment, medical imaging, education and, very recently, criminal investigations. There are different ways of representing 3D information. One of the most common representations is to associate a depth image with a classic colour image called texture. This joint representation allows a good 3D reconstruction, as the two images are well correlated, especially along the contours of the depth image.
Therefore, in comparison with conventional 2D images, knowledge of the depth of field for 3D images provides important semantic information about the composition of the scene. In this thesis, we propose a scalable 3D image coding scheme for the 2D + depth representation with advanced functionalities, which preserves all the semantics present in the images while maintaining significant coding efficiency. The concept of preserving the semantics can be translated into features such as automatic extraction of regions of interest, the ability to encode regions of interest at higher quality than the background, post-production of the scene, and indexing. Thus, we first introduce a joint and scalable 2D-plus-depth coding scheme: texture is coded jointly with depth at low resolution, and a method of depth data compression well suited to the characteristics of depth maps is proposed. This method exploits the strong correlation between the depth map and the texture to better encode the depth map. Then, a high-resolution coding scheme is proposed in order to refine the texture quality. Next, we present a global fine-representation and content-based coding scheme. We propose a representation and coding scheme based on "Depth of Interest", called "3D Autofocus". It consists of a fine extraction of objects that preserves the contours in the depth map, and it automatically focuses on a particular depth zone for high rendering quality. Finally, we propose a 3D image segmentation algorithm providing high consistency between colour, depth, and the regions of the scene. Based on a joint exploitation of the colour and depth information, this algorithm allows segmentation of the scene with a level of granularity depending on the intended application. Based on such a representation of the scene, it is possible to simply apply the same 3D Autofocus principle for Depth of Interest extraction and coding.
Remarkably, both approaches ensure a high spatial coherence between texture, depth, and regions, minimizing distortions along the contours of objects of interest and thus yielding higher quality in the synthesized views.
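The "Depth of Interest" extraction at the heart of the 3D Autofocus idea can be illustrated with a minimal sketch. This is not the thesis's actual algorithm (which respects object contours in the depth map); here the focus zone is a plain depth-range threshold, and all names and values are illustrative assumptions:

```python
import numpy as np

def depth_of_interest_mask(depth_map, z_center, z_tolerance):
    """Select the pixels whose depth falls inside the focus zone.

    The boolean mask could then drive finer quantization (more bits)
    for the region of interest than for the background.
    """
    return np.abs(depth_map.astype(np.float64) - z_center) <= z_tolerance

# Toy 4x4 depth map: an "object" at depth ~50 in front of a background at 200.
depth = np.array([[200, 200, 200, 200],
                  [200,  50,  52, 200],
                  [200,  49,  51, 200],
                  [200, 200, 200, 200]])

mask = depth_of_interest_mask(depth, z_center=50, z_tolerance=5)
print(mask.sum())  # 4 pixels belong to the depth of interest
```

A real autofocus would pick `z_center` automatically from the depth histogram and snap the mask to depth-map contours rather than using a fixed range.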
163

Automatic extraction of bronchus and centerline determination from CT images for three dimensional virtual bronchoscopy.

January 2000 (has links)
Law Tsui Ying.
Thesis (M.Phil.)--Chinese University of Hong Kong, 2000. Includes bibliographical references (leaves 64-70). Abstracts in English and Chinese.

Contents:
  Acknowledgments (p. ii)
  1 Introduction (p. 1)
    1.1 Structure of Bronchus (p. 3)
    1.2 Existing Systems (p. 4)
      1.2.1 Virtual Endoscope System (VES) (p. 4)
      1.2.2 Virtual Reality Surgical Simulator (p. 4)
      1.2.3 Automated Virtual Colonoscopy (AVC) (p. 5)
      1.2.4 QUICKSEE (p. 5)
    1.3 Organization of Thesis (p. 6)
  2 Three Dimensional Visualization in Medicine (p. 7)
    2.1 Acquisition (p. 8)
      2.1.1 Computed Tomography (p. 8)
    2.2 Resampling (p. 9)
    2.3 Segmentation and Classification (p. 9)
      2.3.1 Segmentation by Thresholding (p. 10)
      2.3.2 Segmentation by Texture Analysis (p. 10)
      2.3.3 Segmentation by Region Growing (p. 10)
      2.3.4 Segmentation by Edge Detection (p. 11)
    2.4 Rendering (p. 12)
    2.5 Display (p. 13)
    2.6 Hazards of Visualization (p. 13)
      2.6.1 Adding Visual Richness and Obscuring Important Detail (p. 14)
      2.6.2 Enhancing Details Incorrectly (p. 14)
      2.6.3 The Picture is not the Patient (p. 14)
      2.6.4 Pictures-'R'-Us (p. 14)
  3 Overview of Advanced Segmentation Methodologies (p. 15)
    3.1 Mathematical Morphology (p. 15)
    3.2 Recursive Region Search (p. 16)
    3.3 Active Region Models (p. 17)
  4 Overview of Centerline Methodologies (p. 18)
    4.1 Thinning Approach (p. 18)
    4.2 Volume Growing Approach (p. 21)
    4.3 Combination of Mathematical Morphology and Region Growing Schemes (p. 22)
    4.4 Simultaneous Borders Identification Approach (p. 23)
    4.5 Tracking Approach (p. 24)
    4.6 Distance Transform Approach (p. 25)
  5 Automated Extraction of Bronchus Area (p. 27)
    5.1 Basic Idea (p. 27)
    5.2 Outline of the Automated Extraction Algorithm (p. 28)
      5.2.1 Selection of a Start Point (p. 28)
      5.2.2 Three Dimensional Region Growing Method (p. 29)
      5.2.3 Optimization of the Threshold Value (p. 29)
    5.3 Retrieval of Start Point Algorithm Using Genetic Algorithm (p. 29)
      5.3.1 Introduction to Genetic Algorithm (p. 30)
      5.3.2 Problem Modeling (p. 31)
      5.3.3 Algorithm for Determining a Start Point (p. 33)
      5.3.4 Genetic Operators (p. 33)
    5.4 Three Dimensional Painting Algorithm (p. 34)
      5.4.1 Outline of the Three Dimensional Painting Algorithm (p. 34)
    5.5 Optimization of the Threshold Value (p. 36)
  6 Automatic Centerline Determination Algorithm (p. 38)
    6.1 Distance Transformations (p. 38)
    6.2 End Points Retrieval (p. 41)
    6.3 Graph Based Centerline Algorithm (p. 44)
  7 Experiments and Discussion (p. 48)
    7.1 Experiment of Automated Determination of Bronchus Algorithm (p. 48)
    7.2 Experiment of Automatic Centerline Determination Algorithm (p. 54)
  8 Conclusion (p. 62)
  Bibliography (p. 63)
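The three-dimensional region growing at the core of the automated bronchus extraction (Section 5.2.2) can be sketched as a breadth-first flood fill from a start point. This is a generic illustration under assumed conventions (air-filled bronchus voxels are darker than a threshold; 6-connectivity), not the thesis's exact algorithm, which additionally optimizes the threshold and selects the start point with a genetic algorithm:

```python
from collections import deque

import numpy as np

def region_grow_3d(volume, seed, threshold):
    """Flood-fill a 3D volume from a seed voxel, collecting every
    6-connected voxel whose intensity is below `threshold`."""
    grown = np.zeros(volume.shape, dtype=bool)
    if volume[seed] >= threshold:
        return grown                        # bad seed: not inside the airway
    grown[seed] = True
    queue = deque([seed])
    while queue:
        z, y, x = queue.popleft()
        for dz, dy, dx in ((1,0,0), (-1,0,0), (0,1,0), (0,-1,0), (0,0,1), (0,0,-1)):
            nz, ny, nx = z + dz, y + dy, x + dx
            if (0 <= nz < volume.shape[0] and 0 <= ny < volume.shape[1]
                    and 0 <= nx < volume.shape[2]
                    and not grown[nz, ny, nx]
                    and volume[nz, ny, nx] < threshold):
                grown[nz, ny, nx] = True
                queue.append((nz, ny, nx))
    return grown

# Toy volume: a dark "airway" tube of intensity 10 inside tissue of intensity 100.
vol = np.full((5, 5, 5), 100)
vol[:, 2, 2] = 10                           # a straight 5-voxel tube
segmented = region_grow_3d(vol, seed=(0, 2, 2), threshold=50)
print(segmented.sum())  # 5 voxels segmented
```

Choosing `threshold` too high makes the grown region leak into surrounding tissue, which is exactly why the thesis treats threshold optimization as its own step.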
164

Constraint optimization techniques for graph matching applicable to 3-D object recognition.

January 1996 (has links)
by Chi-Min Pang.
Thesis (M.Phil.)--Chinese University of Hong Kong, 1996. Includes bibliographical references (leaves 110-[115]).

Contents:
  1 Introduction (p. 1)
    1.1 Range Images (p. 1)
    1.2 Rigid Body Model (p. 3)
    1.3 Motivation (p. 4)
    1.4 Thesis Outline (p. 6)
  2 Object Recognition by Relaxation Processes (p. 7)
    2.1 An Overview of Probabilistic Relaxation Labelling (p. 8)
    2.2 Formulation of Model-matching Problem Solvable by Probabilistic Relaxation (p. 10)
      2.2.1 Compatibility Coefficient (p. 11)
      2.2.2 Match Score (p. 13)
      2.2.3 Iterative Algorithm (p. 14)
      2.2.4 A Probabilistic Concurrent Matching Scheme (p. 15)
    2.3 Formulation of Model-merging Problem Solvable by Fuzzy Relaxation (p. 17)
      2.3.1 Updating Mechanism (p. 17)
      2.3.2 Iterative Algorithm (p. 19)
      2.3.3 Merging Sub-Rigid Body Models (p. 20)
    2.4 Simulation Results (p. 21)
      2.4.1 Experiments in Model-matching Using Probabilistic Relaxation (p. 22)
      2.4.2 Experiments in Model-matching Using Probabilistic Concurrent Matching Scheme (p. 26)
      2.4.3 Experiments in Model-merging Using Fuzzy Relaxation (p. 33)
    2.5 Summary (p. 36)
  3 Object Recognition by Hopfield Network (p. 37)
    3.1 An Overview of Hopfield Network (p. 38)
    3.2 Model-matching Problem Solved by Hopfield Network (p. 41)
      3.2.1 Representation of the Solution (p. 41)
      3.2.2 Energy Function (p. 42)
      3.2.3 Equations of Motion (p. 46)
      3.2.4 Interpretation of Solution (p. 49)
      3.2.5 Convergence of the Hopfield Network (p. 50)
      3.2.6 Iterative Algorithm (p. 51)
    3.3 Estimation of Distance Threshold Value (p. 53)
    3.4 Cooperative Concurrent Matching Scheme (p. 55)
      3.4.1 Scheme for Recognizing a Single Object (p. 56)
      3.4.2 Scheme for Recognizing Multiple Objects (p. 60)
    3.5 Simulation Results (p. 60)
      3.5.1 Experiments in the Model-matching Problem Using a Hopfield Network (p. 61)
      3.5.2 Experiments in Model-matching Problem Using Cooperative Concurrent Matching (p. 69)
      3.5.3 Experiments in Model-merging Problem Using Hopfield Network (p. 77)
    3.6 Summary (p. 80)
  4 Genetic Generation of Weighting Parameters for Hopfield Network (p. 83)
    4.1 An Overview of Genetic Algorithms (p. 84)
    4.2 Determination of Weighting Parameters for Hopfield Network (p. 86)
      4.2.1 Chromosomal Representation (p. 87)
      4.2.2 Initial Population (p. 88)
      4.2.3 Evaluation Function (p. 88)
      4.2.4 Genetic Operators (p. 89)
      4.2.5 Control Parameters (p. 91)
      4.2.6 Iterative Algorithm (p. 94)
    4.3 Simulation Results (p. 95)
      4.3.1 Experiments in Model-matching Problem Using Hopfield Network with Genetic Generated Parameters (p. 95)
      4.3.2 Experiments in Model-merging Problem Using Hopfield Network (p. 101)
    4.4 Summary (p. 104)
  5 Conclusions (p. 106)
    5.1 Conclusions (p. 106)
    5.2 Suggestions for Future Research (p. 109)
  Bibliography (p. 110)
  A Proof of Convergence of Fuzzy Relaxation Process (p. 116)
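The probabilistic relaxation labelling of Chapter 2 can be illustrated with a toy update loop: each iteration gathers compatibility-weighted support for every candidate assignment and renormalizes. The support formula, the compatibility tensor, and the two-node example below are illustrative assumptions in the general spirit of relaxation labelling, not the thesis's exact formulation:

```python
import numpy as np

def relaxation_step(P, R):
    """One probabilistic relaxation update.

    P[i, l]       current probability that scene node i matches model label l
    R[i, l, j, m] compatibility coefficient between assignments (i,l) and (j,m)
    """
    n = P.shape[0]
    # Support Q[i, l]: compatibility-weighted evidence from the other nodes.
    Q = np.einsum('iljm,jm->il', R, P) / max(n - 1, 1)
    P_new = P * (1.0 + Q)
    return P_new / P_new.sum(axis=1, keepdims=True)  # renormalize per node

# Two scene nodes, two labels; compatibilities favour the (0->0, 1->1) pairing.
R = np.zeros((2, 2, 2, 2))
R[0, 0, 1, 1] = R[1, 1, 0, 0] = 1.0    # consistent pair: strong support
R[0, 1, 1, 0] = R[1, 0, 0, 1] = -1.0   # inconsistent pair: inhibition
P = np.full((2, 2), 0.5)               # start from uniform ambiguity
P[0, 0] += 0.01; P[0, 1] -= 0.01       # tiny bias breaks the symmetry
for _ in range(30):
    P = relaxation_step(P, R)
print(np.round(P))  # converges to the consistent labelling [[1. 0.] [0. 1.]]
```

The thesis's Hopfield-network formulation attacks the same assignment problem by minimizing an energy function with row/column constraint terms instead of iterating normalized probabilities.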
165

Practical Structural Design and Control for Digital Clay

Zhu, Haihong 20 July 2005 (has links)
Digital Clay is a next-generation human-machine communication interface based on a tangible haptic surface. This thesis embraces this revolutionary concept and seeks to give it a physical embodiment that will confirm its feasibility and enable experimentation relating to its utility and possible improvements. Under the approach adopted in this work, Digital Clay can be described as a 3D monitor whose pixels move perpendicularly to the screen to form a morphing surface. Users can view, touch, and modify the shape of the working surface formed by these pixels. Physically, the pixels are the tips of micro hydraulic actuators, or Hapcels (haptic cells, since Digital Clay supports a haptic interface). The user can feel the desired material properties when touching the working surface. The potential applications of Digital Clay range from computer-aided engineering design to scientific research, medical diagnosis, 3D dynamic mapping, and entertainment. One could imagine a future in which, using Digital Clay, a user could not only watch an actor in a movie but also touch the actor's face. This research starts with a review of the background of virtual reality; the concept and features of the proposed Digital Clay are then presented. The thesis covers the research stages in the structural design and control of Digital Clay and presents a 5x5 cell array prototype. The first stage of the research focuses on the design and control of a single-cell system. Control issues for a single-cell system constructed from conventional, off-the-shelf components are discussed in detail, followed by experimental results; practical designs of micro actuators and sensors are then presented. The second stage deals with the cell array system. Practical structural design and control methods are discussed that scale to a 100x100 (or even 1000x1000) cell array. Conceptual design and detailed implementations are presented. Finally, a 5x5 cell array prototype, constructed using the proposed design solutions for testing, is presented.
166

Simulation and Fabrication of a Formable Surface for the Digital Clay Haptic Device

Anderson, Theodore E. 27 February 2007 (has links)
A formable surface is part of an effort to create a haptic device that provides a three-dimensional human-computer interface called digital clay. As with real clay, digital clay allows a user to physically manipulate the surface into some form or orientation that is sensed and directly represented in a computer model. Furthermore, digital clay will allow a user to change the computer model by manipulating inputs that are directly represented in the physical model. The digital clay device being researched involves a computer-interfaced array of vertically displacing actuators bounded by a formable surface. The surface is composed of an array of unit cells constructed from compliant spherical joints and translational joints. As part of this thesis, a series of unit cells was developed and planar surfaces were fabricated using the additive manufacturing process of stereolithography. The resultant shape of a manipulated surface was modeled mathematically through energy minimization algorithms that use least-squares analysis to compute the positions of the unit cells of the surface. Simulation results were computed and compared against the movement of a fabricated planar surface. Once the mathematical models were validated against the manufactured surface, a method for attaching the surface to an array of actuators was recommended.
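The energy-minimization-by-least-squares idea can be sketched in one dimension: pin some cells at user-manipulated heights, penalize height differences between neighbouring cells, and solve the resulting overdetermined linear system. This is a simplified stand-in for the thesis's 2D unit-cell model; the function name, the penalty weights, and the 1D chain are all illustrative:

```python
import numpy as np

def settle_surface(n, pinned):
    """Solve for the resting heights of a 1-D chain of n surface cells.

    Minimizes a spring-like energy sum((z[i+1] - z[i])^2) subject to
    heavily weighted user-pinned cells, via linear least squares.
    """
    rows, rhs = [], []
    for i in range(n - 1):                 # smoothness: z[i+1] - z[i] ~ 0
        r = np.zeros(n); r[i], r[i + 1] = -1.0, 1.0
        rows.append(r); rhs.append(0.0)
    for i, z in pinned.items():            # soft constraints, large weight
        r = np.zeros(n); r[i] = 1000.0
        rows.append(r); rhs.append(1000.0 * z)
    z, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
    return z

# Pin the two ends at heights 0 and 4: the free cells settle on a line.
heights = settle_surface(5, {0: 0.0, 4: 4.0})
print(np.round(heights, 3))  # ~ [0. 1. 2. 3. 4.]
```

With only a difference penalty, the free cells interpolate linearly between the pinned ones; a bending (second-difference) penalty would instead give a spline-like sag, closer to a physical compliant surface.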
167

Depth-based 3D videos: quality measurement and synthesized view enhancement

Solh, Mashhour M. 13 December 2011 (has links)
Three-dimensional television (3DTV) is widely believed to be the future of television broadcasting, replacing current 2D HDTV technology. 3DTV promises a more life-like, visually immersive home entertainment experience in which users have the freedom to navigate the scene and choose different viewpoints. A desired view can be synthesized at the receiver side using depth image-based rendering (DIBR). While this approach has many advantages, one of the key challenges in DIBR is generating high-quality synthesized views. This work presents novel methods to measure and enhance the quality of 3D videos generated through DIBR. For quality measurement, we describe a novel method to characterize and measure the distortions introduced by the multiple cameras used to capture stereoscopic images. In addition, we present an objective quality measure for DIBR-based 3D videos that evaluates the elements of visual discomfort in stereoscopic 3D videos. We also introduce a new concept called the ideal depth estimate, and define the tools to estimate that depth. Full-reference and no-reference profiles for calculating the proposed measures are also presented. Moreover, we introduce two approaches to improve the quality of the synthesized views generated by DIBR. The first is based on hierarchical blending of background and foreground information around the disocclusion areas, which produces a natural-looking synthesized view with seamless hole-filling. This approach yields virtual images free of geometric distortions, unlike algorithms that preprocess the depth map, and unlike other hole-filling approaches it is not sensitive to depth maps with a high percentage of bad pixels from stereo matching. The second approach further enhances the results through a depth-adaptive preprocessing of the colored images. Finally, we propose an enhancement to depth estimation that exploits monocular depth cues from luminance and chrominance. The estimated depth is evaluated using our quality measure, and the hole-filling algorithm is used to generate synthesized views. This application demonstrates how our quality measures and enhancement algorithms can help in the development of high-quality stereoscopic depth-based synthesized videos.
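A minimal DIBR sketch shows where the disocclusion holes come from that hole-filling must repair: warping pixels by a depth-derived disparity uncovers background that no source pixel maps to. The disparity model (larger depth value = nearer = larger shift) and the naive left-neighbour fill below are deliberate simplifications, not the thesis's hierarchical blending:

```python
import numpy as np

def render_virtual_view(texture, depth, baseline=2):
    """Shift each pixel horizontally by a depth-derived disparity,
    z-buffering conflicts; pixels left at -1 are disocclusion holes."""
    h, w = texture.shape
    view = np.full((h, w), -1, dtype=int)   # -1 marks a hole
    zbuf = np.full((h, w), -np.inf)
    for y in range(h):
        for x in range(w):
            d = (baseline * depth[y, x]) // 255   # nearer => larger shift
            nx = x + d
            if 0 <= nx < w and depth[y, x] > zbuf[y, nx]:
                zbuf[y, nx] = depth[y, x]
                view[y, nx] = texture[y, x]
    # Naive hole-filling: propagate the colour from the left neighbour.
    for y in range(h):
        for x in range(1, w):
            if view[y, x] < 0:
                view[y, x] = view[y, x - 1]
    return view

# One scanline: a foreground object (colour 5, near) over background (colour 1).
tex   = np.array([[1, 1, 5, 5, 1, 1, 1, 1]])
depth = np.array([[0, 0, 255, 255, 0, 0, 0, 0]])
out = render_virtual_view(tex, depth)
print(out)  # [[1 1 1 1 5 5 1 1]]
```

The foreground shifts right by two pixels and the uncovered positions are filled from the background on the left; blindly filling from the foreground side is what produces the visible smearing artifacts that smarter background-aware blending avoids.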
168

Vlist and Ering: compact data structures for simplicial 2-complexes

Zhu, Xueyun 13 January 2014 (has links)
Various data structures have been proposed for representing the connectivity of manifold triangle meshes. For example, the Extended Corner Table (ECT) stores V+6T references, where V and T respectively denote the vertex and triangle counts. ECT supports Random Access and Traversal (RAT) operators at Constant Amortized Time (CAT) cost. We propose two novel variations of ECT that also support RAT operations at CAT cost, but can be used to represent and process Simplicial 2-Complexes (S2Cs), which may represent star-connecting, non-orientable, and non-manifold triangulations along with dangling edges, which we call sticks. Vlist stores V+3T+3S+3(C+S-N) references, where S denotes the stick count, C the number of edge-connected components, and N the number of star-connecting vertices. Ering stores 6T+3S+3(C+S-N) references, but has two advantages over Vlist: the Ering implementation of the operators is faster and purely topological (i.e., it performs no geometric queries). The Vlist and Ering representations have two principal advantages over previously proposed representations for simplicial complexes: (1) lower storage cost, at least for meshes with significantly more triangles than sticks, and (2) explicit support for side-respecting traversal operators, each of which walks from a corner on the face of a triangle t, across an edge or a vertex of t, to a corner on a face of a triangle, or to an end of a stick, that shares a vertex with t, all without ever piercing through the surface of a triangle.
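The reference counts quoted above can be compared directly; the formulas are taken from the abstract, while the mesh sizes in the example are made up for illustration:

```python
def ect_refs(V, T):
    """Extended Corner Table: V + 6T references."""
    return V + 6 * T

def vlist_refs(V, T, S, C, N):
    """Vlist: V + 3T + 3S + 3(C + S - N) references."""
    return V + 3 * T + 3 * S + 3 * (C + S - N)

def ering_refs(T, S, C, N):
    """Ering: 6T + 3S + 3(C + S - N) references."""
    return 6 * T + 3 * S + 3 * (C + S - N)

# A hypothetical mesh with many more triangles than sticks:
# V = 1000 vertices, T = 2000 triangles, S = 10 sticks,
# C = 1 edge-connected component, N = 0 star-connecting vertices.
V, T, S, C, N = 1000, 2000, 10, 1, 0
print(ect_refs(V, T))             # 13000
print(vlist_refs(V, T, S, C, N))  # 7063
print(ering_refs(T, S, C, N))     # 12063
```

For triangle-dominated meshes the 3T term of Vlist beats the 6T of ECT and Ering, which matches the paper's claim that the storage advantage holds when triangles significantly outnumber sticks.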
169

A reconfigurable tactile display based on polymer MEMS technology

Wu, Xiaosong 25 March 2008 (has links)
This research focuses on the development of polymer microfabrication technologies for the realization of two major components of a pneumatic tactile display: a microactuator array and a complementary microvalve (control) array. The concept, fabrication, and characterization of a kinematically stabilized polymeric microbubble actuator ("endoskeletal microbubble actuator") are presented. A systematic design and modeling procedure was carried out to generate an optimized geometry of the corrugated diaphragm satisfying the membrane deflection, force, and stability requirements set forth by the tactile display goals. A refreshable Braille cell, as a tactile display prototype, has been developed based on a 2x3 endoskeletal microbubble array and an array of commercial valves. The prototype can provide both a static display (which meets the displacement and force requirements of a Braille display) and vibratory tactile sensations. The device was also designed to be light and compact enough for portable operation. The design is scalable with respect to the number of tactile actuators while remaining simple to fabricate. To further reduce the size and cost of the tactile display, a microvalve array can be integrated into the system to control the pneumatic fluid that actuates the microbubble actuators. A piezoelectrically driven, hydraulically amplified polymer microvalve has been designed, fabricated, and tested. An incompressible elastomer is used as a solid hydraulic medium to convert the small axial displacement of a piezoelectric actuator into a large valve head stroke while maintaining a large blocking force. The function of the microvalve as an on-off switch for a pneumatic microbubble tactile actuator was demonstrated. To further reduce the cost of the microvalve, a laterally stacked multilayer PZT actuator has been fabricated using diced PZT multilayers, high-aspect-ratio SU-8 photolithography, and molding of electrically conductive polymer composite electrodes.
170

Efficient 3D scene modeling and mosaicing

Nicosevici, Tudor 18 December 2009 (has links)
El modelat d'escenes és clau en un gran ventall d'aplicacions que van des de la generació de mapes fins a la realitat augmentada. Aquesta tesi presenta una solució completa per a la creació de models 3D amb textura. En primer lloc es presenta un mètode de Structure from Motion seqüencial, on el model 3D de l'entorn s'actualitza a mesura que s'adquireix nova informació visual. La proposta és més precisa i robusta que l'estat de l'art. També s'ha desenvolupat un mètode online, basat en visual bag-of-words, per a la detecció eficient de llaços. Essent una tècnica completament seqüencial i automàtica, permet la reducció de la deriva, millorant la navegació i la construcció de mapes. Per tal de construir mapes en àrees extenses, es proposa un algorisme de simplificació de models 3D orientat a aplicacions online. L'eficiència de les propostes s'ha comparat amb altres mètodes utilitzant diversos conjunts de dades submarines i terrestres. / Scene modeling has a key role in applications ranging from visual mapping to augmented reality. This thesis presents an end-to-end solution for creating accurate, automatic 3D textured models, with contributions at several levels. First, we describe a method developed within the framework of sequential Structure from Motion, where a 3D model of the environment is maintained and updated as visual information becomes available. The technique is more accurate and robust than state-of-the-art 3D modeling approaches. We also develop an efficient online loop-closure detection algorithm, allowing the reduction of drift and uncertainties for mapping and navigation. Inspired by visual bag-of-words, the technique is entirely sequential and automatic. Lastly, motivated by the need to map large areas, we propose a 3D model simplification method oriented towards online applications. We discuss the efficiency of the proposals and compare them with state-of-the-art approaches using a series of challenging datasets from both underwater and outdoor scenarios.
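The visual bag-of-words matching that underlies loop-closure detection can be sketched as descriptor quantization against a fixed vocabulary followed by histogram comparison; the vocabulary, the cosine score, and the toy frames below are illustrative assumptions, not the thesis's sequential pipeline:

```python
import numpy as np

def bow_histogram(descriptors, vocabulary):
    """Quantize local feature descriptors to their nearest visual word
    (Euclidean distance) and return a normalized word-frequency histogram."""
    d2 = ((descriptors[:, None, :] - vocabulary[None, :, :]) ** 2).sum(-1)
    words = d2.argmin(axis=1)
    hist = np.bincount(words, minlength=len(vocabulary)).astype(float)
    return hist / hist.sum()

def loop_closure_score(h1, h2):
    """Cosine similarity between two bag-of-words histograms."""
    return float(h1 @ h2 / (np.linalg.norm(h1) * np.linalg.norm(h2)))

rng = np.random.default_rng(0)
vocab = rng.normal(size=(8, 4))                 # 8 visual words in a 4-D space
frame_a = vocab[[0, 0, 1, 2]] + 0.01 * rng.normal(size=(4, 4))
frame_b = vocab[[0, 1, 2, 2]] + 0.01 * rng.normal(size=(4, 4))  # same place
frame_c = vocab[[5, 6, 7, 7]] + 0.01 * rng.normal(size=(4, 4))  # elsewhere
ha, hb, hc = (bow_histogram(f, vocab) for f in (frame_a, frame_b, frame_c))
print(loop_closure_score(ha, hb) > loop_closure_score(ha, hc))  # True
```

A frame revisiting a known place shares visual words with the earlier frame and scores high, which is the signal a loop-closure detector thresholds to trigger drift correction; real systems add tf-idf weighting and temporal consistency checks on top of this basic comparison.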
