  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
341

2D and 3D shape descriptors

Martinez-Ortiz, Carlos A. January 2010 (has links)
The field of computer vision studies the computational tools and methods that computers require to process visual information such as images and video. Shape descriptors are among the tools commonly used in image processing applications. A shape descriptor is a mathematical function that is applied to an image and produces numerical values representative of a particular characteristic of the image. These values can then be processed to provide information about the image; for example, they can be fed to a classifier in order to assign a class label to the image. A number of shape descriptors for 2D and 3D images already exist in the literature. The aim of this thesis is to develop additional shape descriptors that improve upon, or provide an alternative to, the existing ones. A large majority of existing 2D shape descriptors use surface information to produce a measure. In some applications, however, surface information is not present and only partially extracted contours are available. In such cases, boundary-based shape descriptors must be used. A new boundary-based shape descriptor called Linearity is introduced; this measure can be applied to open or closed curve segments. In general, far fewer 3D images are available than 2D images, and as a consequence the number of existing 3D shape descriptors is also comparatively small. However, there is increasing interest in the development of 3D descriptors. This thesis presents two basic 3D measures which are then modified to produce a range of new shape descriptors. All of these descriptors are similar in their behaviour; however, they can be combined and applied in different image processing applications such as image retrieval and classification, a fact demonstrated through several examples.
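A boundary-based descriptor of the kind described above can be sketched as a function mapping a list of contour points to a single number. The score below measures how close a curve segment is to a straight line via the eigenvalue ratio of the point covariance; it is a hypothetical illustrative proxy, not the Linearity measure actually defined in the thesis:

```python
import numpy as np

def linearity(points):
    """Illustrative linearity score for an open or closed curve segment,
    given as an (N, 2) array of boundary points. Returns a value in
    [0.5, 1]: 1.0 for perfectly collinear points, ~0.5 for a circle.
    NOTE: a hypothetical proxy, not the thesis's Linearity measure."""
    pts = np.asarray(points, dtype=float)
    pts = pts - pts.mean(axis=0)                 # centre the curve
    cov = np.cov(pts, rowvar=False)              # 2x2 covariance of (x, y)
    eigvals = np.sort(np.linalg.eigvalsh(cov))   # ascending eigenvalues
    # Fraction of total variance along the dominant direction:
    # 1.0 means all points lie on a straight line.
    return eigvals[-1] / eigvals.sum()
```

Such a score could then be fed to a classifier alongside other descriptor values, in the spirit of the pipeline the abstract outlines.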
342

Novel entropy coding and its application to the compression of 3D image and video signals

Amal, Mehanna January 2013 (has links)
The broadcast industry is moving future digital television towards super-high-resolution TV (4K or 8K) and/or 3D TV. This will ultimately increase the demand for data rate and, consequently, for highly efficient codecs. 3D integral imaging and video is regarded as one of the most promising technologies for the industry in the next few years because of its simplicity and its ability to mimic reality without viewing aids. One of the challenges of 3D integral technology is to improve the compression algorithms so that they suit the high resolution and exploit the characteristics of this technology. The research scope of this thesis is the design of novel coding for 3D integral image and video compression. First, to address the compression of 3D integral imaging, the research proposes a novel entropy coding, implemented initially on traditional 2D image content so that it can be compared with the common standards, and then applied to 3D integral images and video. This approach seeks high performance, meaning high image quality and low bit rate, combined with low computational complexity. Second, new algorithms are proposed to improve the performance of the transform techniques: initially a new adaptive 3D-DCT algorithm, then a new hybrid 3D DWT-DCT algorithm that exploits the advantages of each technique while avoiding the artefacts each of them suffers from. Finally, the proposed entropy coding is applied to 3D integral video together with a further proposed algorithm that calculates the motion vector on the average viewpoint of each frame. This approach seeks to minimise complexity and processing time without degrading performance as judged by the human visual system (HVS). A number of block-matching techniques are investigated to determine which is best suited to the proposed 3D integral video algorithm.
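As a rough illustration of the block matching investigated above, the classic exhaustive (full-search) baseline with a sum-of-absolute-differences cost can be sketched as follows; the function name, block size, and search range are illustrative assumptions, not values taken from the thesis:

```python
import numpy as np

def full_search_sad(ref, cur, block_xy, block_size=8, search_range=4):
    """Exhaustive block-matching motion estimation with a sum of
    absolute differences (SAD) cost. Returns the motion vector (dy, dx)
    mapping the current-frame block at block_xy to its best match in
    the reference frame, plus the matching cost. A minimal sketch of
    the full-search baseline that faster techniques are compared to."""
    y, x = block_xy
    b = block_size
    cur_block = cur[y:y + b, x:x + b].astype(np.int64)
    best_sad, best_mv = None, (0, 0)
    for dy in range(-search_range, search_range + 1):
        for dx in range(-search_range, search_range + 1):
            yy, xx = y + dy, x + dx
            if yy < 0 or xx < 0 or yy + b > ref.shape[0] or xx + b > ref.shape[1]:
                continue  # candidate block falls outside the frame
            cand = ref[yy:yy + b, xx:xx + b].astype(np.int64)
            sad = np.abs(cand - cur_block).sum()
            if best_sad is None or sad < best_sad:
                best_sad, best_mv = sad, (dy, dx)
    return best_mv, best_sad
```

Faster techniques (three-step search, diamond search, and the like) prune this candidate grid; the full search is simply the quality ceiling they are measured against.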
343

Investigation of Different Video Compression Schemes Using Neural Networks

Kovvuri, Prem 20 January 2006 (has links)
Image/video compression has great significance in the communication of motion pictures and still images. The need for compression has resulted in the development of various techniques, including transform coding, vector quantization and neural networks. In this thesis, neural-network-based methods are investigated to achieve good compression ratios while maintaining image quality. Parts of this investigation include motion detection and weight retraining. An adaptive technique is employed to improve video frame quality for a given compression ratio by frequently updating the weights obtained from training; more specifically, weight retraining is performed only when the error exceeds a given threshold value. Image quality is measured objectively using the peak signal-to-noise ratio performance measure. Results show the improved performance of the proposed architecture compared to existing approaches. The proposed method is implemented in MATLAB, and the results obtained, such as compression ratio versus signal-to-noise ratio, are presented.
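The objective quality measure referred to above, peak signal-to-noise ratio, can be sketched as follows (assuming 8-bit images, so a peak value of 255):

```python
import numpy as np

def psnr(original, reconstructed, peak=255.0):
    """Peak signal-to-noise ratio in dB between an original image and
    its reconstruction; higher is better. A minimal sketch of the
    objective measure reported against compression ratio above."""
    err = np.asarray(original, dtype=float) - np.asarray(reconstructed, dtype=float)
    mse = np.mean(err ** 2)        # mean squared reconstruction error
    if mse == 0:
        return float("inf")        # identical images
    return 10.0 * np.log10(peak ** 2 / mse)
```

Plotting this value against the achieved compression ratio gives the rate-quality curves that such comparisons are typically based on.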
344

IR-Based Indoor Localisation and Positioning System

Agmell, Simon, Dekker, Marcus January 2019 (has links)
This thesis presents a prototype beacon-based indoor positioning system that uses IR-based triangulation together with inertial sensors mounted on the receiver. By applying a Kalman filter, the mobile receivers estimate their position by fusing the data received from the two independent measurement systems. Furthermore, the system is designed to perform all calculations on microcontrollers. Multiple IR beacons and an AGV were constructed to determine the system's performance. Empirical and practical experiments show that the proposed localisation system is capable of centimetre accuracy. However, hardware limitations restrict the system's update frequency and range. With these limitations in mind, the final sensor-fused solution shows great promise, but it requires an extended component assessment and more advanced localisation estimation methods, such as an extended Kalman filter or a particle filter, to increase reliability.
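The fusion step described above can be illustrated with a minimal scalar Kalman filter that dead-reckons on an inertial velocity estimate and corrects with each IR triangulation fix. The interface and the noise variances q and r are assumptions made for this sketch, not values from the thesis:

```python
def kalman_1d(z_ir, u_imu, dt=0.1, q=0.01, r=0.25):
    """Minimal 1D Kalman filter fusing IMU-derived velocity (process
    input u_imu) with IR-beacon position fixes (measurements z_ir).
    q: assumed process-noise variance; r: assumed measurement-noise
    variance. Returns the filtered position estimates."""
    x, p = z_ir[0], 1.0                # initial state and variance
    estimates = []
    for z, u in zip(z_ir, u_imu):
        # Predict: dead-reckon with the inertial velocity estimate.
        x = x + u * dt
        p = p + q
        # Update: correct with the IR triangulation measurement.
        k = p / (p + r)                # Kalman gain
        x = x + k * (z - x)
        p = (1.0 - k) * p
        estimates.append(x)
    return estimates
```

An extended Kalman filter, as suggested in the conclusion, generalises exactly this predict/update cycle to the nonlinear multi-beacon geometry.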
345

The technology of image processing and its application in the business world.

January 1991 (has links)
by Choy Ho Yuk, Anthony & Lo Shin Sing, Samuel. / Thesis (M.B.A.)--Chinese University of Hong Kong, 1991. / Bibliography: leaves 80-82. / ABSTRACT --- p.ii / TABLE OF CONTENTS --- p.iii / LIST OF ILLUSTRATIONS --- p.v / Chapter I. --- INTRODUCTION --- p.1 / Evolution of Image Processing --- p.1 / Scope and Statement of the Problem --- p.2 / Methodology --- p.4 / Chapter II. --- TECHNOLOGY OF IMAGE PROCESSING --- p.6 / Concept of Image --- p.6 / What is Image --- p.6 / Image as Non-coded Information --- p.10 / Types of Image --- p.10 / General Flow of Image Processing --- p.11 / Storage of Image Documents --- p.14 / Electromagnetic devices --- p.15 / Optical Disk and Juke Boxes --- p.15 / Storage Management --- p.18 / Image Management System --- p.20 / Image Communication --- p.22 / Chapter III. --- APPLICATIONS OF IMAGE PROCESSING --- p.25 / How to Implement an Image Processing System --- p.25 / Feasibility Study --- p.25 / Implementation Stages --- p.27 / Benefits of Image Processing --- p.31 / Storage --- p.31 / Document Organization --- p.32 / Data Security --- p.32 / Data Integrity --- p.33 / Document Retrieval and Workflow Management --- p.33 / Concurrency --- p.34 / Issues of Image Processing --- p.35 / Cost Justification --- p.35 / Paper Storage Elimination --- p.37 / Data Conversion --- p.38 / Legal acceptance of Image Document --- p.39 / Environment suitable for Image Processing --- p.40 / Banks --- p.40 / Hospitals --- p.41 / Insurance Companies --- p.42 / USAA Image Processing Case Study --- p.43 / Chapter IV. --- INTEGRATION OF IMAGE PROCESSING WITH OTHER TECHNOLOGY --- p.47 / Interface with Data Processing --- p.47 / Interface with Microfilm --- p.48 / Input --- p.48 / Storage Media --- p.49 / Output --- p.50 / Software --- p.51 / Comparison between Microfilm and Optical Disk --- p.52 / Integration of Microfilm with Optical Disk --- p.56 / Interface with Facsimile --- p.57 / Chapter V. --- EVALUATION OF EXISTING IMAGING SYSTEMS AND PRODUCTS --- p.59 / Image Management Systems --- p.59 / Wang Integrated Image Systems --- p.60 / IBM Imageplus --- p.61 / Philips Megadoc --- p.62 / Scanners --- p.63 / Wang Laboratories --- p.64 / Ricoh --- p.65 / Optical Disks --- p.66 / Storage Dimensions --- p.67 / Wang Laboratories --- p.68 / Limitation of Optical Disk Systems --- p.68 / Printers --- p.69 / Wang Laboratories --- p.70 / IBM --- p.71 / Workstations --- p.71 / Wang PC 200/300 Series Image Workstation --- p.72 / IBM PS/2 Imageplus Workstation --- p.73 / Chapter VI. --- CONCLUSION --- p.75 / BIBLIOGRAPHY --- p.80
346

High efficiency block coding techniques for image data.

January 1992 (has links)
by Lo Kwok-tung. / Thesis (Ph.D.)--Chinese University of Hong Kong, 1992. / Includes bibliographical references. / ABSTRACT --- p.i / ACKNOWLEDGEMENTS --- p.iii / LIST OF PRINCIPAL SYMBOLS AND ABBREVIATIONS --- p.iv / LIST OF FIGURES --- p.vii / LIST OF TABLES --- p.ix / TABLE OF CONTENTS --- p.x / Chapter CHAPTER 1 --- Introduction / Chapter 1.1 --- Background - The Need for Image Compression --- p.1-1 / Chapter 1.2 --- Image Compression - An Overview --- p.1-2 / Chapter 1.2.1 --- Predictive Coding - DPCM --- p.1-3 / Chapter 1.2.2 --- Sub-band Coding --- p.1-5 / Chapter 1.2.3 --- Transform Coding --- p.1-6 / Chapter 1.2.4 --- Vector Quantization --- p.1-8 / Chapter 1.2.5 --- Block Truncation Coding --- p.1-10 / Chapter 1.3 --- Block Based Image Coding Techniques --- p.1-11 / Chapter 1.4 --- Goal of the Work --- p.1-13 / Chapter 1.5 --- Organization of the Thesis --- p.1-14 / Chapter CHAPTER 2 --- Block-Based Image Coding Techniques / Chapter 2.1 --- Statistical Model of Image --- p.2-1 / Chapter 2.1.1 --- One-Dimensional Model --- p.2-1 / Chapter 2.1.2 --- Two-Dimensional Model --- p.2-2 / Chapter 2.2 --- Image Fidelity Criteria --- p.2-3 / Chapter 2.2.1 --- Objective Fidelity --- p.2-3 / Chapter 2.2.2 --- Subjective Fidelity --- p.2-5 / Chapter 2.3 --- Transform Coding Theory --- p.2-6 / Chapter 2.3.1 --- Transformation --- p.2-6 / Chapter 2.3.2 --- Quantization --- p.2-10 / Chapter 2.3.3 --- Coding --- p.2-12 / Chapter 2.3.4 --- JPEG International Standard --- p.2-14 / Chapter 2.4 --- Vector Quantization Theory --- p.2-18 / Chapter 2.4.1 --- Codebook Design and the LBG Clustering Algorithm --- p.2-20 / Chapter 2.5 --- Block Truncation Coding Theory --- p.2-22 / Chapter 2.5.1 --- Optimal MSE Block Truncation Coding --- p.2-24 / Chapter CHAPTER 3 --- Development of New Orthogonal Transforms / Chapter 3.1 --- Introduction --- p.3-1 / Chapter 3.2 --- Weighted Cosine Transform --- p.3-4 / Chapter 3.2.1 --- Development of the WCT --- p.3-6 / Chapter 3.2.2 --- Determination of α and β --- p.3-9 / Chapter 3.3 --- Simplified Cosine Transform --- p.3-10 / Chapter 3.3.1 --- Development of the SCT --- p.3-11 / Chapter 3.4 --- Fast Computational Algorithms --- p.3-14 / Chapter 3.4.1 --- Weighted Cosine Transform --- p.3-14 / Chapter 3.4.2 --- Simplified Cosine Transform --- p.3-18 / Chapter 3.4.3 --- Computational Requirement --- p.3-19 / Chapter 3.5 --- Performance Evaluation --- p.3-21 / Chapter 3.5.1 --- Evaluation using Statistical Model --- p.3-21 / Chapter 3.5.2 --- Evaluation using Real Images --- p.3-28 / Chapter 3.6 --- Concluding Remarks --- p.3-31 / Chapter 3.7 --- Note on Publications --- p.3-32 / Chapter CHAPTER 4 --- Pruning in Transform Coding of Images / Chapter 4.1 --- Introduction --- p.4-1 / Chapter 4.2 --- "Direct Fast Algorithms for DCT, WCT and SCT" --- p.4-3 / Chapter 4.2.1 --- Discrete Cosine Transform --- p.4-3 / Chapter 4.2.2 --- Weighted Cosine Transform --- p.4-7 / Chapter 4.2.3 --- Simplified Cosine Transform --- p.4-9 / Chapter 4.3 --- Pruning in Direct Fast Algorithms --- p.4-10 / Chapter 4.3.1 --- Discrete Cosine Transform --- p.4-10 / Chapter 4.3.2 --- Weighted Cosine Transform --- p.4-13 / Chapter 4.3.3 --- Simplified Cosine Transform --- p.4-15 / Chapter 4.4 --- Operations Saved by Using Pruning --- p.4-17 / Chapter 4.4.1 --- Discrete Cosine Transform --- p.4-17 / Chapter 4.4.2 --- Weighted Cosine Transform --- p.4-21 / Chapter 4.4.3 --- Simplified Cosine Transform --- p.4-23 / Chapter 4.4.4 --- Generalization Pruning Algorithm for DCT --- p.4-25 / Chapter 4.5 --- Concluding Remarks --- p.4-26 / Chapter 4.6 --- Note on Publications --- p.4-27 / Chapter CHAPTER 5 --- Efficient Encoding of DC Coefficient in Transform Coding Systems / Chapter 5.1 --- Introduction --- p.5-1 / Chapter 5.2 --- Minimum Edge Difference (MED) Predictor --- p.5-3 / Chapter 5.3 --- Performance Evaluation --- p.5-6 / Chapter 5.4 --- Simulation Results --- p.5-9 / Chapter 5.5 --- Concluding Remarks --- p.5-14 / Chapter 5.6 --- Note on Publications --- p.5-14 / Chapter CHAPTER 6 --- Efficient Encoding Algorithms for Vector Quantization of Images / Chapter 6.1 --- Introduction --- p.6-1 / Chapter 6.2 --- Sub-Codebook Searching Algorithm (SCS) --- p.6-4 / Chapter 6.2.1 --- Formation of the Sub-codebook --- p.6-6 / Chapter 6.2.2 --- Premature Exit Conditions in the Searching Process --- p.6-8 / Chapter 6.2.3 --- Sub-Codebook Searching Algorithm --- p.6-11 / Chapter 6.3 --- Predictive Sub-Codebook Searching Algorithm (PSCS) --- p.6-13 / Chapter 6.4 --- Simulation Results --- p.6-17 / Chapter 6.5 --- Concluding Remarks --- p.6-20 / Chapter 6.6 --- Note on Publications --- p.6-21 / Chapter CHAPTER 7 --- Predictive Classified Address Vector Quantization of Images / Chapter 7.1 --- Introduction --- p.7-1 / Chapter 7.2 --- Optimal Three-Level Block Truncation Coding --- p.7-3 / Chapter 7.3 --- Predictive Classified Address Vector Quantization --- p.7-5 / Chapter 7.3.1 --- Classification of Images using Three-level BTC --- p.7-6 / Chapter 7.3.2 --- Predictive Mean Removal Technique --- p.7-8 / Chapter 7.3.3 --- Simplified Address VQ Technique --- p.7-9 / Chapter 7.3.4 --- Encoding Process of PCAVQ --- p.7-13 / Chapter 7.4 --- Simulation Results --- p.7-14 / Chapter 7.5 --- Concluding Remarks --- p.7-18 / Chapter 7.6 --- Note on Publications --- p.7-18 / Chapter CHAPTER 8 --- Recapitulation and Topics for Future Investigation / Chapter 8.1 --- Recapitulation --- p.8-1 / Chapter 8.2 --- Topics for Future Investigation --- p.8-3 / REFERENCES --- p.R-1 / APPENDICES / Chapter A. --- Statistics of Monochrome Test Images --- p.A-1 / Chapter B. --- Statistics of Color Test Images --- p.A-2 / Chapter C. --- Fortran Program Listing for the Pruned Fast DCT Algorithm --- p.A-3 / Chapter D. --- Training Set Images for Building the Codebook of Standard VQ Scheme --- p.A-5 / Chapter E. --- List of Publications --- p.A-7
347

An Image access protocol: design, implementation and services.

January 1992 (has links)
by Kong Tat Cheong. / Thesis (M.Sc.)--Chinese University of Hong Kong, 1992. / Includes bibliographical references (leaves 64-65). / Chapter CHAPTER 1. --- INTRODUCTION --- p.1 / Chapter CHAPTER 2. --- RECENT RESEARCH REVIEW --- p.4 / Chapter CHAPTER 3. --- IMAGE ACCESS PROTOCOL --- p.8 / Chapter 3.1 --- Design Principles --- p.11 / Chapter 3.2 --- Protocol Mechanism --- p.16 / Chapter 3.3 --- IAP Packet Formats --- p.20 / Chapter 3.4 --- Protocol Operation Example --- p.28 / Chapter CHAPTER 4. --- SYSTEM IMPLEMENTATION --- p.31 / Chapter 4.1 --- Software Architecture and Interfaces --- p.33 / Chapter 4.2 --- System Operations and Applications --- p.43 / Chapter 4.3 --- Image Transmission Efficiency --- p.48 / Chapter CHAPTER 5. --- ENHANCED SYSTEM SERVICES --- p.51 / Chapter 5.1 --- Progressive Coding --- p.51 / Chapter 5.2 --- Call Management --- p.56 / Chapter 5.3 --- Priority Control --- p.57 / Chapter 5.4 --- Concurrent Control --- p.58 / Chapter CHAPTER 6. --- CONCLUSION --- p.59 / Chapter APPENDIX 1. --- APPLICATION PROGRAMMING INTERFACE --- p.61 / REFERENCE --- p.64
348

A Cooperative algorithm for stereo disparity computation.

January 1991 (has links)
by Or Siu Hang. / Thesis (M.Phil.)--Chinese University of Hong Kong, 1991. / Bibliography: leaves [102]-[105]. / Acknowledgements --- p.V / Chapter Chapter 1 --- Introduction / Chapter 1.1 --- The problem --- p.1 / Chapter 1.1.1 --- The correspondence problem --- p.5 / Chapter 1.1.2 --- The problem of surface reconstruction --- p.6 / Chapter 1.2 --- Our goal --- p.8 / Chapter 1.3 --- Previous works --- p.8 / Chapter 1.3.1 --- Constraints on matching --- p.10 / Chapter 1.3.2 --- Interpolation of disparity surfaces --- p.12 / Chapter Chapter 2 --- Preprocessing of images / Chapter 2.1 --- Which operator to use --- p.14 / Chapter 2.2 --- Directional zero-crossing --- p.14 / Chapter 2.3 --- Laplacian of Gaussian --- p.16 / Chapter 2.3.1 --- Theoretical background of the Laplacian of Gaussian --- p.18 / Chapter 2.3.2 --- Implementation of the operator --- p.21 / Chapter Chapter 3 --- Disparity Layers Generation / Chapter 3.1 --- Geometrical constraint --- p.23 / Chapter 3.2 --- Basic idea of disparity layer --- p.26 / Chapter 3.3 --- Consideration in matching --- p.28 / Chapter 3.4 --- effect of vertical misalignment of sensor --- p.37 / Chapter 3.5 --- Final approach --- p.39 / Chapter Chapter 4 --- Disparity combination / Chapter 4.1 --- Ambiguous match from different layers --- p.52 / Chapter 4.2 --- Our approach --- p.54 / Chapter Chapter 5 --- Generation of dense disparity map / Chapter 5.1 --- Introduction --- p.58 / Chapter 5.2 --- Cooperative computation --- p.58 / Chapter 5.2.1 --- Formulation of oscillation algorithm --- p.59 / Chapter 5.3 --- Interpolation by Gradient descent method --- p.69 / Chapter 5.3.1 --- Formulation of constraints --- p.70 / Chapter 5.3.2 --- Gradient projection interpolation algorithm --- p.72 / Chapter 5.3.3 --- Implementation of the algorithm --- p.78 / Chapter Chapter 6 --- Conclusion --- p.89 / Reference / Appendix (Dynamical behavior of the cooperative algorithm)
349

Calibration of stereo images using OTV correspondences.

January 1994 (has links)
by Sai-kee Wong. / Thesis (M.Phil.)--Chinese University of Hong Kong, 1994. / Includes bibliographical references (leaves 74-77). / Acknowledgments --- p.ii / List Of Figures --- p.v / List Of Tables --- p.vii / Abstract --- p.viii / Chapter 1 --- Introduction --- p.1 / Chapter 1.1 --- Motivation --- p.1 / Chapter 1.2 --- Objective of the Study --- p.2 / Chapter 1.3 --- Our Approach --- p.3 / Chapter 1.4 --- Original Contributions --- p.4 / Chapter 1.5 --- Organization of this Dissertation --- p.5 / Chapter 2 --- Previous Work --- p.6 / Chapter 2.1 --- Absolute orientation approach --- p.6 / Chapter 2.2 --- Relative orientation approach --- p.7 / Chapter 3 --- Calibration using OTV correspondences --- p.12 / Chapter 3.1 --- Problem Statement --- p.12 / Chapter 3.2 --- Recovering the orientation of an OTV from a single view --- p.14 / Chapter 3.3 --- Recovering the transformation parameters between two views --- p.18 / Chapter 3.3.1 --- Recovering R --- p.19 / Chapter 3.3.2 --- Recovering t --- p.20 / Chapter 3.3.3 --- Summary of all the steps --- p.20 / Chapter 3.3.4 --- Recovering R and t using more than 2 OTVs --- p.21 / Chapter 4 --- Experimental Results --- p.23 / Chapter 4.1 --- Simulated Data Experiments --- p.23 / Chapter 4.1.1 --- Error versus the smallest angle among the projected branches of an OTV --- p.23 / Chapter 4.1.2 --- Comparison with a point correspondence algorithm --- p.24 / Chapter 4.2 --- Real Image Experiment --- p.41 / Chapter 5 --- Error Analysis --- p.52 / Chapter 5.1 --- Translation in x only --- p.55 / Chapter 5.1.1 --- Jacobian Matrix on Rotation --- p.56 / Chapter 5.1.2 --- Jacobian Matrix on Translation --- p.57 / Chapter 5.2 --- "Rotation + translation in x, y, z" --- p.60 / Chapter 5.2.1 --- Jacobian Matrix on Rotation --- p.60 / Chapter 5.2.2 --- Jacobian Matrix on Translation --- p.61 / Chapter 5.3 --- "Rotation + translation in x,y" --- p.64 / Chapter 5.3.1 --- Jacobian Matrix on Rotation --- p.65 / Chapter 5.3.2 --- Jacobian Matrix on Translation --- p.65 / Chapter 6 --- Conclusion and Future work --- p.68 / Chapter Appendix A --- Least-squares Approximation of a set of Rotation Matrices --- p.70 / Chapter Appendix B --- Epipolar Lines independent of the Translation Magnitude --- p.72
350

Managing imbalanced training data by sequential segmentation in machine learning

Bardolet Pettersson, Susana January 2019 (has links)
Imbalanced training data is a common problem in machine learning applications. This problem refers to datasets in which the foreground pixels are significantly fewer than the background pixels. By training a machine learning model with imbalanced data, the result is typically a model that classifies all pixels as the background class. A result that indicates no presence of a specific condition when it is actually present is particularly undesired in medical imaging applications. This project proposes a sequential system of two fully convolutional neural networks to tackle the problem. Semantic segmentation of lung nodules in thoracic computed tomography images has been performed to evaluate the performance of the system. The imbalanced data problem is present in the training dataset used in this project, where the average percentage of pixels belonging to the foreground class is 0.0038 %. The sequential system achieved a sensitivity of 83.1 %, representing an increase of 34 % compared to the single system. The system missed only 16.83 % of the nodules but had a Dice score of 21.6 % due to the detection of multiple false positives. With continued development, this method shows considerable potential as a solution to the imbalanced data problem.
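The two metrics reported above, pixel-wise sensitivity and the Dice score, can be computed from binary masks as in this minimal sketch:

```python
import numpy as np

def sensitivity_and_dice(pred, truth):
    """Pixel-wise sensitivity (recall) and Dice score between a
    predicted binary mask and a ground-truth binary mask; a minimal
    sketch of the two evaluation metrics used above."""
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    tp = np.logical_and(pred, truth).sum()    # true positives
    fn = np.logical_and(~pred, truth).sum()   # missed foreground
    fp = np.logical_and(pred, ~truth).sum()   # false alarms
    sens = tp / (tp + fn) if tp + fn else 0.0
    dice = 2 * tp / (2 * tp + fp + fn) if tp + fp + fn else 1.0
    return sens, dice
```

The combination seen above, high sensitivity with a low Dice score, is exactly what this pair of formulas produces when most foreground pixels are found (small fn) but many false positives are raised (large fp).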
