• About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations, provided by the Networked Digital Library of Theses and Dissertations (NDLTD); its metadata is collected from universities around the world.
321

Recovery of the local gravity field by spherical regularization wavelets approximation and its numerical implementation

Shuler, Harrey Jeong 29 April 2014 (has links)
As an alternative to spherical harmonics in modeling the gravity field of the Earth, we built a multiresolution gravity model by employing spherical regularization wavelets in solving the inverse problem, i.e., downward continuation of the gravity signal to the Earth's surface. Scale-discrete Tikhonov spherical regularization scaling functions and wavelet packets were used to decompose and reconstruct the signal. We recovered the local gravity anomaly using only localized gravity measurements at the observing satellite's altitude of 300 km. When the gravity anomaly upward-continued to the satellite altitude with a resolution of 0.5° was used as simulated measurement input, our model could recover the local surface gravity anomaly at a spatial resolution of 1° with an RMS error between 1 and 10 mGal, depending on the topography of the gravity field. Our study of the effect of varying the data volume and altering the maximum degree of Legendre polynomials on the accuracy of the recovered gravity solution suggests that short-wavelength signals and regions with high-magnitude gravity gradients respond more strongly to such changes. When tested with simulated SGG measurements, i.e., the second-order radial derivative of the gravity anomaly, at an altitude of 300 km with a 0.7° spatial resolution as input data, our model could obtain the gravity anomaly with an RMS error of 1 to 7 mGal at a surface resolution of 0.7° (< 80 km). The study of the impact of measurement noise on the recovered gravity anomaly implies that solutions from SGG measurements are less susceptible to measurement errors than those recovered from the upward-continued gravity anomaly, indicating that an SGG-type mission such as GOCE would be an ideal choice for implementing our model. Our simulation results demonstrate the model's potential in determining the local gravity field at a finer scale than could be achieved through spherical harmonics, i.e., less than 100 km, with excellent performance in edge detection. / text
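The downward continuation named in this abstract is a classically ill-posed inverse problem. A minimal per-degree sketch of how Tikhonov regularization stabilizes it, using only the spherical-harmonic attenuation factor and an illustrative regularization parameter (this is not the thesis's scale-discrete wavelet scheme), might look like:

```python
import numpy as np

R = 6371.0  # mean Earth radius, km
H = 300.0   # satellite altitude from the abstract, km

def upward_factor(l, radius=R, altitude=H):
    """Attenuation of a degree-l spherical-harmonic coefficient when the
    field is continued upward from the surface to the satellite altitude."""
    return (radius / (radius + altitude)) ** (l + 1)

def tikhonov_downward(coeff_at_altitude, l, alpha=1e-12):
    """Regularized downward continuation of one coefficient.

    Naive downward continuation divides by the (tiny) upward factor and
    amplifies noise; the Tikhonov solution of A x = y, x = A y / (A^2 + alpha),
    damps that amplification. alpha here is purely illustrative; in practice
    it is tied to the measurement noise level.
    """
    A = upward_factor(l)
    return A * coeff_at_altitude / (A**2 + alpha)

# Round trip: a 1 mGal surface coefficient at degree 180, observed at 300 km.
l = 180
surface = 1.0
at_altitude = surface * upward_factor(l)
recovered = tikhonov_downward(at_altitude, l)                 # close to 1 mGal
over_damped = tikhonov_downward(at_altitude, l, alpha=1e-6)   # heavily damped
```

The trade-off shown by the two calls is exactly what drives the RMS errors quoted above: too little regularization amplifies noise, too much damps the short-wavelength signal.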
322

Methods of dynamical systems, harmonic analysis and wavelets applied to several physical systems

Petrov, Nikola Petrov 28 August 2008 (has links)
Not available / text
323

Fabric defect detection by wavelet transform and neural network

Lee, Tin-chi., 李天賜. January 2004 (has links)
published_or_final_version / abstract / toc / Electrical and Electronic Engineering / Master / Master of Philosophy
324

Patterned Jacquard fabric defect detection

Ngan, Yuk-tung, Henry., 顏旭東. January 2004 (has links)
published_or_final_version / abstract / toc / Electrical and Electronic Engineering / Master / Master of Philosophy
325

Design of 1-D and 2-D perfect reconstruction filter banks

陳志榮, Chan, Chi-wing. January 1996 (has links)
published_or_final_version / Electrical and Electronic Engineering / Master / Master of Philosophy
326

New design and realization techniques for perfect reconstruction two-channel filterbanks and wavelet bases

Pun, Ka-shun, Carson., 潘加信. January 2002 (has links)
published_or_final_version / Electrical and Electronic Engineering / Master / Master of Philosophy
327

Study of wavelet and the filter bank theory with application to image coding

Ni, Jiangqun., 倪江群. January 1998 (has links)
published_or_final_version / Electrical and Electronic Engineering / Doctoral / Doctor of Philosophy
328

Complementary DNA microarray image processing based on wavelets and Markov random field models

Μάντουκας, Θεόδωρος 27 April 2009 (has links)
Complementary DNA microarrays are a powerful and efficient tool that uses genome sequence information to analyze the structure and function of tens of thousands of genes simultaneously. A typical cDNA microarray image is a collection of green and red discrete spots containing DNA. Each spot occupies a small fraction of the image, and its mean fluorescence intensity is closely related to the expression level of the corresponding gene. The main process for measuring spot intensity values involves three tasks: gridding, segmentation, and data extraction. In the present study, spot location was accomplished using an automatic gridding method based on the continuous wavelet transform (CWT). First, line profiles for the x and y axes were calculated. Second, the CWT was applied up to 15 scales to both profiles, using Daubechies 4 (db4) as the mother wavelet. Third, a point-by-point summation of the signals of all 15 scales was calculated. Fourth, a hard-thresholding wavelet-based denoising technique was applied to each signal. Finally, spot centers and boundaries were defined by calculating the local maxima and local minima of both signals.
The proposed segmentation method is divided into three major steps: First, the à trous wavelet transform (AWT) was applied up to the second scale on the initial cell. Second, a hard-threshold filter was applied to the detail coefficients of each scale in order to suppress noise. Finally, the initial image, together with the approximations and details of each scale, was fed into an ensemble scheme based on an MRF segmentation model.
The operators chosen for the ensemble scheme were: Majority Vote, Min, Product, and Probabilistic Product. The proposed algorithms were validated on a high-quality simulated microarray image of 1040 spots with realistic morphological characteristics, generated using the Matlab microarray simulation model, and on fourteen real cDNA microarray images, seven 16-bit grayscale TIFF images per channel (green and red), collected from the DERICI public database. In order to investigate the performance of the algorithms in the presence of noise, the simulated image was corrupted with additive white Gaussian noise. In the case of the simulated image, segmentation accuracy was evaluated by means of the segmentation matching factor (SMF), probability of error (PE), and coefficient of determination (CD) with respect to the actual pixel classes (foreground versus background). In the case of the real images, the evaluation was based on the Mean Absolute Error (MAE), in order to measure the algorithms' reliability indirectly. According to our results on simulated cells, the proposed ensemble schemes led to more accurate spot determination than the conventional MRF model. Additionally, the Majority Vote operator achieved the highest score in all cases, especially on cells with heavy noise (SMF: 82.69%, PE: 6.60%, and CD: 0.809), while the conventional MRF gathered the lowest score in all cases (SMF: 94.87%-82.69%, PE: 3.03%-9.85%, CD: 0.961-0.729). In the case of the real images, the Min operator achieved the lowest score (MAE: 803.96 and Normalized MAE: 0.0738), in contrast to Majority Vote, which reached the highest score among the proposed methods (MAE: 990.49 and Normalized MAE: 0.0738). Additionally, all the proposed algorithms reduced the MAE compared to the conventional MRF segmentation model (MAE: 1183.50 and Normalized MAE: 0.0859).
329

Development of a multiscale and multiphysics simulation framework for reaction-diffusion-convection problems

Mishra, Sudib Kumar January 2009 (has links)
Reaction-diffusion-convection (R-D-C) problems are governed by a wide spectrum of spatio-temporal scales associated with a range of physical and chemical processes. Such problems are called multiscale, multiphysics problems. The challenge with R-D-C problems is to bridge these scales and processes as seamlessly as possible. For this purpose, we develop a wavelet-based multiscale simulation framework that couples diverse scales and physics. In a first stage we focus on R-D models. We treat the 'fine' reaction scales stochastically, with kinetic Monte Carlo (kMC). Transport via diffusion possesses larger spatio-temporal scales, which are bridged to the kMC with the Compound Wavelet Matrix (CWM) formalism. Since R-D-C problems are dynamical, we extend the CWM method via dynamic coupling of the kMC and diffusion models. The process is approximated by sequential increments, where the CWM on each increment is used as the starting point for the next, providing better exploration of phase space. The CWM is extended to two-dimensional diffusion with a reactive line boundary to show that the computational gain and error depend on the scale overlap and wavelet filtering. We improve on homogenization with a wavelet-based scheme for the exchange of information between a reactive and a diffusive field, passing information from fine to coarse (up-scaling) and coarse to fine (down-scaling) scales while retaining the fine-scale statistics (higher-order moments, correlations). Critical to the success of the scheme is the identification of dominant scales. The efficiency of the scheme is compared to homogenization and to a benchmark model with scale disparity. To incorporate transport by convection, we then couple the Lattice Boltzmann Model (LBM) and kMC operating at diverse scales for flow around a reactive block. Such a model explores markedly different physics due to the strong interplay between these time scales.
'Small' reaction-induced temperature variations are considered for multiscale coupling of the reactions with the flow, showing discrepancies in the evolutions and yield compared to the conventional model. The same framework is used to study reactions induced by hydrodynamic bubble collapse, which shows similar features of kinetics and yield compared to conventional models. We conclude with some problems that could be solved using the developed framework, and preliminary results are presented as "proof of concept."
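The CWM idea of stitching fine-scale statistics from the stochastic (kMC) solution onto the coarse scales of the deterministic (diffusion) solution can be caricatured in one dimension with a hand-rolled orthonormal Haar transform. This is a sketch of the scale-splicing step only, not the thesis's formalism; the function names and split point are illustrative:

```python
import numpy as np

def haar_decompose(x, levels):
    """Orthonormal 1-D Haar decomposition: returns [d1, d2, ..., dL, aL]."""
    coeffs = []
    a = np.asarray(x, dtype=float)
    for _ in range(levels):
        even, odd = a[0::2], a[1::2]
        coeffs.append((even - odd) / np.sqrt(2))  # detail band
        a = (even + odd) / np.sqrt(2)             # approximation
    coeffs.append(a)
    return coeffs

def haar_reconstruct(coeffs):
    """Inverse of haar_decompose."""
    a = coeffs[-1]
    for d in reversed(coeffs[:-1]):
        out = np.empty(2 * a.size)
        out[0::2] = (a + d) / np.sqrt(2)
        out[1::2] = (a - d) / np.sqrt(2)
        a = out
    return a

def compound(fine_signal, coarse_signal, levels, split):
    """CWM-style splice: keep the `split` finest detail bands from the
    stochastic (kMC) signal and all coarser bands from the deterministic
    (diffusion) signal, then reconstruct one compound signal."""
    cf = haar_decompose(fine_signal, levels)
    cc = haar_decompose(coarse_signal, levels)
    return haar_reconstruct(cf[:split] + cc[split:])

signal_fine = np.arange(8.0)    # stand-in for a kMC trace
signal_coarse = np.ones(8)      # stand-in for a diffusion solution
merged = compound(signal_fine, signal_coarse, levels=3, split=2)
```

With split=0 the compound signal is just the coarse one; with split=levels it keeps every detail band of the fine signal and only the mean of the coarse one, which is the sense in which the scheme "retains the fine-scale statistics."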
330

Low-complexity methods for image and video watermarking

Coria Mendoza, Lino Evgueni 05 1900 (has links)
For digital media, the risk of piracy is aggravated by the ease of copying and distributing content. Watermarking has become the technology of choice for discouraging people from creating illegal copies of digital content. Watermarking is the practice of imperceptibly altering the media content by embedding a message, which can be used to identify the owner of that content. A watermark message can also be a set of instructions for the display equipment, providing information about the content's usage restrictions. Several applications are considered and three watermarking solutions are provided. First, applications such as owner identification, proof of ownership, and digital fingerprinting are considered, and a fast content-dependent image watermarking method is proposed. The scheme offers a high degree of robustness against distortions, mainly additive noise, scaling, low-pass filtering, and lossy compression, and has a low computational cost. The method generates a set of evenly distributed codewords that are constructed via an iterative algorithm. Every message bit is represented by one of these codewords and is then embedded in one of the image's 8 × 8 pixel blocks. The information in that particular block is used in the embedding so as to ensure robustness and image fidelity. Two watermarking schemes designed to prevent theatre camcorder piracy are also presented. In these methods, the video is watermarked so that its display is not permitted if a compliant video player detects the watermark. A watermark that is robust to geometric distortions (rotation, scaling, cropping) and lossy compression is required in order to block access to media content that has been recorded with a camera inside a movie theatre. The proposed algorithms take advantage of the properties of the dual-tree complex wavelet transform (DT CWT).
This transform offers the advantages of both the regular and the complex wavelets (perfect reconstruction, approximate shift invariance, and good directional selectivity). Our methods use these characteristics to create watermarks that are robust to geometric distortions and lossy compression. The proposed schemes are simple to implement and outperform comparable methods when tested against geometric distortions.
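The block-wise bit embedding of the first scheme can be sketched as plain additive spread-spectrum watermarking. Random ±1 codewords stand in for the thesis's iteratively constructed, evenly distributed codeword set, and a mean-removed correlation stands in for a real detector; all names are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_codewords(n_bits, block=8):
    """Pseudo-random +/-1 codewords, one per message bit. (The thesis
    constructs evenly distributed codewords iteratively; plain random
    codes stand in here.)"""
    return rng.choice([-1.0, 1.0], size=(n_bits, block * block))

def embed_bit(block_pixels, codeword, bit, strength=2.0):
    """Additive spread-spectrum embedding of one bit into an 8x8 block:
    add (or subtract) a faint copy of the codeword."""
    sign = 1.0 if bit else -1.0
    return block_pixels + strength * sign * codeword.reshape(8, 8)

def detect_bit(block_pixels, codeword):
    """Blind correlation detector: remove the block mean (a crude stand-in
    for the host-interference suppression a real scheme needs), then
    correlate with the codeword; positive correlation means bit 1."""
    residual = block_pixels - block_pixels.mean()
    return float(np.dot(residual.ravel(), codeword)) > 0.0

host = np.full((8, 8), 128.0)      # a flat 8x8 block
cw = make_codewords(1)[0]
marked1 = embed_bit(host, cw, 1)
marked0 = embed_bit(host, cw, 0)
```

On flat or smooth blocks the mean-removed correlation isolates the watermark; on textured hosts the detector needs the stronger host-interference handling that the thesis's content-dependent codeword design provides.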
