791 |
Digital Processor and Computer Graphics. Ρούντζας, Αθανάσιος. 28 May 2009
This study was carried out as part of the diploma thesis "Digital Processor and Computer Graphics". In the course of this work we had the opportunity to examine and understand the process by which graphics are rendered on the screen of a computer or a mobile phone, and, building on that, to propose new ideas and techniques aimed at improving performance and efficiency. To this end, after studying the existing algorithms, we arrived at a proposal for a new one, intended primarily for small display devices such as mobile phones, whose main goals are to reduce complex computational operations and to save energy. This thesis presents the operation of the existing algorithms as well as of the proposed one. Comparisons between them are carried out in order to quantify the resulting improvement, and we also develop the groundwork for implementing and applying the proposed algorithm in practice.
|
792 |
Three-dimensional reconstruction of the vein network in a human finger by processing multiple infrared photographs. Χελιώτης, Γεώργιος. 19 October 2009
The subject of this diploma thesis is the extraction of vein patterns from photographs of human fingers using a ridge-following algorithm. We first review several methods of personal identification. We then describe the Hitachi finger-vein method and the principles that govern it. Next, we derive the ridge algorithm and apply it to a set of test photographs, and present our conclusions and observations. Finally, the appendix lists the complete algorithm as MATLAB code.
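As a hypothetical illustration of the valley-tracking idea (a drastic simplification, not the thesis's MATLAB implementation): veins absorb infrared light and appear dark, so a crude per-column extractor can follow the darkest pixel in each column of the image.

```python
def extract_vein_ridge(image):
    """For each column of a grayscale image (a list of rows),
    return the row index of the darkest pixel -- a crude stand-in
    for a cross-sectional valley (ridge) detector, since veins
    absorb infrared light and appear dark."""
    n_rows = len(image)
    n_cols = len(image[0])
    ridge = []
    for c in range(n_cols):
        column = [image[r][c] for r in range(n_rows)]
        ridge.append(min(range(n_rows), key=lambda r: column[r]))
    return ridge

# A toy 4x4 image with a dark "vein" running along row 2.
img = [
    [200, 210, 205, 208],
    [190, 195, 200, 198],
    [ 40,  35,  50,  45],   # dark vein
    [205, 200, 210, 207],
]
print(extract_vein_ridge(img))  # [2, 2, 2, 2]
```

A real detector would smooth the profile and track ridge continuity across columns; this sketch only conveys the per-column search.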
|
793 |
Inverse Parametric Alignment for Accurate Biological Sequence Comparison. Kim, Eagu. January 2008
For as long as biologists have been computing alignments of sequences, the question of what values to use for scoring substitutions and gaps has persisted. In practice, substitution scores are usually chosen by convention, and gap penalties are often found by trial and error. In contrast, a rigorous way to determine parameter values that are appropriate for aligning biological sequences is by solving the problem of Inverse Parametric Sequence Alignment: given examples of biologically correct reference alignments, this is the problem of finding parameter values that make the examples score as close as possible to optimal alignments of their sequences. The reference alignments that are currently available contain regions where the alignment is not specified, which leads to a version of the problem with partial examples.
In this dissertation, we develop a new polynomial-time algorithm for Inverse Parametric Sequence Alignment that is simple to implement, fast in practice, and can learn hundreds of parameters simultaneously from hundreds of examples. Computational results with partial examples show that best possible values for all 212 parameters of the standard alignment scoring model for protein sequences can be computed from 200 examples in 4 hours of computation on a standard desktop machine. We also consider a new scoring model with a small number of additional parameters that incorporates predicted secondary structure for the protein sequences. By learning parameter values for this new secondary-structure-based model, we can improve on the alignment accuracy of the standard model by as much as 15% for sequences with less than 25% identity.
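The inverse-parametric idea can be sketched at toy scale (a hedged illustration: the dissertation's polynomial-time algorithm is replaced here by a grid search, and all function names are invented): score each reference alignment under candidate parameters, compare with the optimal alignment score of the same sequences, and keep the parameters that minimise the total gap, which is zero exactly when every reference alignment is optimal.

```python
from itertools import product

def align_score(a, b, match, mismatch, gap):
    """Needleman-Wunsch optimal global alignment score."""
    n, m = len(a), len(b)
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        dp[i][0] = dp[i - 1][0] + gap
    for j in range(1, m + 1):
        dp[0][j] = dp[0][j - 1] + gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            s = match if a[i - 1] == b[j - 1] else mismatch
            dp[i][j] = max(dp[i - 1][j - 1] + s,
                           dp[i - 1][j] + gap,
                           dp[i][j - 1] + gap)
    return dp[n][m]

def ref_score(row_a, row_b, match, mismatch, gap):
    """Score a given reference alignment, with '-' marking gaps."""
    total = 0
    for x, y in zip(row_a, row_b):
        if x == '-' or y == '-':
            total += gap
        elif x == y:
            total += match
        else:
            total += mismatch
    return total

def inverse_fit(examples, grid):
    """Choose the parameters minimising the total difference between
    optimal and reference alignment scores (always >= 0)."""
    best, best_gap = None, None
    for match, mismatch, gap in grid:
        g = sum(align_score(ra.replace('-', ''), rb.replace('-', ''),
                            match, mismatch, gap)
                - ref_score(ra, rb, match, mismatch, gap)
                for ra, rb in examples)
        if best_gap is None or g < best_gap:
            best, best_gap = (match, mismatch, gap), g
    return best, best_gap

params, gap = inverse_fit([("AC-GT", "ACTGT")],
                          list(product([1, 2], [-1], [-1, -2])))
print(params, gap)  # (1, -1, -1) 0
```

A gap of zero means the reference alignment is itself optimal under the chosen parameters, which is exactly the criterion the dissertation optimises at scale.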
|
794 |
List-mode SPECT reconstruction using empirical likelihood. Lehovich, Andre. January 2005
This dissertation investigates three topics related to image reconstruction from list-mode Anger camera data. Our main focus is the processing of photomultiplier-tube (PMT) signals directly into images.
First, we look at the use of list-mode calibration data to reconstruct a non-parametric likelihood model relating the object to the data list. The reconstructed model can then be combined with list-mode object data to produce a maximum-likelihood (ML) reconstruction, an approach we call double list-mode reconstruction. This trades off reduced prior assumptions about the properties of the imaging system for greatly increased processing time and increased uncertainty in the reconstruction.
Second, we use the list-mode expectation-maximization (EM) algorithm to reconstruct planar projection images directly from PMT data. Images reconstructed by EM are compared with images produced using the faster and more common technique of first producing ML position estimates, then histogramming to form an image. A mathematical model of the human visual system, the channelized Hotelling observer, is used to compare the reconstructions by performing the Rayleigh task, a traditional measure of resolution. EM is found to produce higher-resolution images than the histogram approach, suggesting that information is lost during the position estimation step.
Finally, we investigate which linear parameters of an object are estimable, in other words may be estimated without bias from list-mode data. We extend the notion of a linear system operator, familiar from binned-mode systems, to list-mode systems, and show that the estimable parameters are determined by the range of the adjoint of the system operator. As in the binned-mode case, the list-mode sensitivity functions define "natural pixels" with which to reconstruct the object.
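In the binned-mode limit, the EM reconstruction reduces to the familiar MLEM update; a minimal sketch (hypothetical, for a known system matrix H rather than the list-mode PMT data used in the dissertation):

```python
def mlem(H, counts, n_iter=50):
    """Maximum-likelihood EM (MLEM) for a linear Poisson model:
    counts[i] ~ Poisson(sum_j H[i][j] * x[j]), with x >= 0."""
    n_bins, n_pix = len(H), len(H[0])
    # sensitivity: total contribution of each pixel to the data
    sens = [sum(H[i][j] for i in range(n_bins)) for j in range(n_pix)]
    x = [1.0] * n_pix
    for _ in range(n_iter):
        # forward-project the current estimate
        proj = [sum(H[i][j] * x[j] for j in range(n_pix))
                for i in range(n_bins)]
        # back-project the measured/expected count ratios
        back = [sum(H[i][j] * counts[i] / proj[i] for i in range(n_bins))
                for j in range(n_pix)]
        x = [x[j] * back[j] / sens[j] for j in range(n_pix)]
    return x

# Sanity check: with an identity system matrix the ML estimate
# is just the counts themselves.
x = mlem([[1.0, 0.0], [0.0, 1.0]], [3.0, 7.0])
print(x)  # [3.0, 7.0]
```

The list-mode version iterates over individual events instead of bins, but the multiplicative update structure is the same.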
|
795 |
Development and implementation of an artificially intelligent search algorithm for sensor fault detection using neural networks. Singh, Harkirat. 30 September 2004
This work is aimed at the development of an artificially intelligent search algorithm used in conjunction with an Auto-Associative Neural Network (AANN) to help locate and reconstruct faulty sensor inputs in control systems. The AANN can be trained to detect when sensors go faulty, but the problem of locating the faulty sensor still remains. The search algorithm aids the AANN in locating the faulty sensors and reconstructing their actual values. The algorithm uses domain-specific heuristics based on the inherent behavior of the AANN to achieve its task. Common sensor errors such as drift, shift and random errors, and the algorithm's response to them, have been studied. The issue of noise has also been investigated. These areas cover the first part of this work. The second part focuses on the development of a web interface that implements and displays the working of the algorithm. The interface allows any client on the World Wide Web to connect to the engineering software MATLAB. The client can then simulate a drift, shift or random error using the graphical user interface and observe the response of the algorithm.
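The search idea can be illustrated with a toy analytic-redundancy model standing in for the trained AANN (all names and redundancy relations here are invented for this sketch): substitute a reconstructed value for each sensor in turn, and the substitution that drives the reconstruction residual back to zero identifies the faulty channel.

```python
def residual(s):
    """Stand-in for the AANN reconstruction error: in this toy
    system the four sensors are redundant through the relations
    s2 = s0 + s1 and s3 = s0 - s1."""
    return abs(s[0] + s[1] - s[2]) + abs(s[0] - s[1] - s[3])

def reconstruct(s, i):
    """Return a copy of s with sensor i replaced by its best
    estimate from the remaining sensors."""
    est = {0: (s[2] + s[3]) / 2,
           1: (s[2] - s[3]) / 2,
           2: s[0] + s[1],
           3: s[0] - s[1]}
    fixed = list(s)
    fixed[i] = est[i]
    return fixed

def locate_fault(s, tol=1e-6):
    """If the residual exceeds tol, try reconstructing each sensor;
    the substitution that minimises the residual flags the fault."""
    if residual(s) <= tol:
        return None, list(s)
    best = min(range(len(s)), key=lambda i: residual(reconstruct(s, i)))
    return best, reconstruct(s, best)

faulty = [4.0, 3.0, 5.0, -1.0]   # s0 has drifted; its true value is 2.0
idx, fixed = locate_fault(faulty)
print(idx, fixed)  # 0 [2.0, 3.0, 5.0, -1.0]
```

In the thesis the residual comes from a trained AANN rather than explicit equations, but the search-and-substitute loop is the same shape.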
|
796 |
Accuracy aspects of the reaction-diffusion master equation on unstructured meshes. Kieri, Emil. January 2011
The reaction-diffusion master equation (RDME) is a stochastic model for spatially heterogeneous chemical systems. Stochastic models have proved useful for problems in molecular biology, since the copy numbers of the participating chemical species are often small, which gives rise to stochastic behaviour. The RDME is a discrete-space model, in contrast to spatially continuous models based on Brownian motion. In this thesis, two accuracy issues of the RDME on unstructured meshes are studied. The first concerns the rates of diffusion events. Errors due to previously used rates are evaluated, and a second-order accurate finite volume method, not previously used in this context, is implemented. The new discretisation improves the accuracy considerably, but unfortunately it puts constraints on the mesh, limiting its current usability. The second issue concerns the rates of bimolecular reactions. Using the macroscopic reaction coefficients, these rates become too low when the spatial resolution is high. Recently, two methods to overcome this problem by calculating mesoscopic reaction rates for Cartesian meshes have been proposed. The methods are compared and evaluated, and are found to work remarkably well. Their possible extension to unstructured meshes is discussed.
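On a uniform 1-D mesh the diffusion jump rates take the textbook form d = D/h² per molecule (the rate a finite-volume discretisation reduces to when the mesh is uniform); a minimal Gillespie-style sketch (hypothetical names, and the exponential waiting-time sampling is omitted for brevity):

```python
import random

def diffusion_rates(n_voxels, D, h):
    """Per-molecule jump propensities between neighbouring voxels
    on a uniform 1-D mesh: d = D / h**2 in each direction."""
    d = D / h ** 2
    rates = {}
    for i in range(n_voxels - 1):
        rates[(i, i + 1)] = d
        rates[(i + 1, i)] = d
    return rates

def ssa_diffuse(copies, rates, n_events, rng):
    """Gillespie-style event selection for pure diffusion: each
    jump i -> j fires with probability proportional to
    copies[i] * rates[(i, j)]."""
    copies = list(copies)
    for _ in range(n_events):
        props = [(copies[i] * r, i, j) for (i, j), r in rates.items()]
        total = sum(p for p, _, _ in props)
        u = rng.random() * total
        for p, i, j in props:
            u -= p
            if u <= 0:
                copies[i] -= 1
                copies[j] += 1
                break
    return copies

rates = diffusion_rates(4, D=1.0, h=0.5)
final = ssa_diffuse([40, 0, 0, 0], rates, 200, random.Random(1))
print(rates[(0, 1)], sum(final))  # 4.0 40
```

On unstructured meshes the per-edge rates come from the discretised Laplacian instead of D/h², which is precisely where the accuracy issues studied in the thesis arise; note that diffusion events always conserve the total copy number.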
|
797 |
Investigating the Correlation between Swallow Accelerometry Signal Parameters and Anthropometric and Demographic Characteristics of Healthy Adults. Hanna, Fady. 24 February 2009
This thesis studied correlations between swallowing accelerometry parameters and anthropometric characteristics in 50 healthy participants. The anthropometric characteristics included age, gender, weight, height, body fat percentage, neck circumference and mandibular length. Dual-axis swallowing signals from a biaxial accelerometer were obtained for 5 saliva and 10 water swallows (5 wet and 5 wet chin-tuck) per participant.
Two participant-independent automatic segmentation algorithms, based on discrete wavelet transforms of the swallowing sequences, segmented 1) saliva and wet swallows and 2) wet chin-tuck swallows. Extraction of individual swallows hinged on dynamic thresholding based on signal statistics.
Canonical correlation analysis was performed on the sets of anthropometric and swallowing-signal variables, the latter including variance, skewness, kurtosis, autocorrelation decay time, energy, scale and peak amplitude. For wet swallows, significant linear relationships were found between signal and anthropometric variables: in the superior-inferior direction, correlations linked weight, age and gender to skewness and signal memory; in the anterior-posterior direction, age was correlated with kurtosis and signal memory. No significant relationship was observed for dry (saliva) and wet chin-tuck swallows.
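Three of the signal variables named above can be computed directly from standardised central moments; a small sketch (the function name is invented, and population rather than sample conventions are assumed):

```python
def signal_features(x):
    """Variance, skewness and kurtosis of a signal, using
    population (biased) central-moment conventions."""
    n = len(x)
    mean = sum(x) / n
    m2 = sum((v - mean) ** 2 for v in x) / n   # 2nd central moment
    m3 = sum((v - mean) ** 3 for v in x) / n   # 3rd central moment
    m4 = sum((v - mean) ** 4 for v in x) / n   # 4th central moment
    return m2, m3 / m2 ** 1.5, m4 / m2 ** 2

variance, skew, kurt = signal_features([1, 2, 3, 4, 5])
print(variance, skew)  # 2.0 0.0  (a symmetric signal has zero skew)
```

These per-swallow features would then form one of the two variable sets entered into the canonical correlation analysis.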
|
798 |
Computing sparse multiples of polynomials. Tilak, Hrushikesh. 20 August 2010
We consider the problem of finding a sparse multiple of a polynomial. Given a polynomial f ∈ F[x] of degree d over a field F, and a desired sparsity t = O(1), our goal is to determine whether there exists a multiple h ∈ F[x] of f such that h has at most t non-zero terms, and if so, to find such an h.
When F = Q, we give an algorithm running in time polynomial in d and the size of the coefficients of h. For binomial multiples, we prove a polynomial bound on the degree of the least-degree binomial multiple, independent of the coefficient size.
When F is a finite field, we show that the problem is at least as hard as determining the multiplicative order of elements in an extension field of F (a problem thought to have complexity similar to that of factoring integers), and this lower bound is tight when t = 2.
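The binomial case (t = 2) connects directly to multiplicative orders: x^t ≡ c (mod f) for some constant c exactly when x^t − c is a multiple of f. A brute-force sketch over a small finite field (hypothetical code, assuming f is monic with a non-zero constant term; this is not the thesis's algorithm):

```python
def mul_x_mod(r, f, p):
    """Multiply the remainder r (coefficients low to high, deg < d)
    by x and reduce modulo the monic polynomial f over F_p."""
    r = [0] + r                     # multiply by x
    lead = r.pop()                  # coefficient of x^d
    return [(c - lead * fc) % p for c, fc in zip(r, f[:-1])]

def binomial_multiple(f, p, max_t=1000):
    """Search for the least t >= 1 with x^t ≡ c (mod f) for some
    non-zero c in F_p: then x^t - c is a binomial multiple of f.
    Returns None if no such t <= max_t exists."""
    r = [1] + [0] * (len(f) - 2)    # the polynomial 1, deg < d
    for t in range(1, max_t + 1):
        r = mul_x_mod(r, f, p)
        if r[0] != 0 and all(c == 0 for c in r[1:]):
            return t, r[0]
    return None

# x^2 + x + 1 over F_2 divides x^3 + 1, i.e. x^3 ≡ 1 (mod f).
print(binomial_multiple([1, 1, 1], 2))  # (3, 1)
```

The least such t is governed by the multiplicative order of x in F_p[x]/(f), which is why the hardness result above reduces from order computation.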
|
799 |
Local Likelihood for Interval-censored and Aggregated Point Process Data. Fan, Chun-Po Steve. 03 March 2010
The use of the local likelihood method (Tibshirani and Hastie, 1987; Loader, 1996) in the presence of interval-censored or aggregated data leads to a natural consideration of an EM-type strategy, or rather a local EM algorithm. In the thesis, we consider local EM to analyze the point process data that are either interval-censored or aggregated into regional counts. We specifically formulate local EM algorithms for density, intensity and risk estimation and implement the algorithms using a piecewise constant function. We demonstrate that the use of the piecewise constant function at the E-step explicitly results in an iteration that involves an expectation, maximization and smoothing step, or an EMS algorithm considered in Silverman, Jones, Wilson and Nychka (1990). Consequently, we reveal a previously unknown connection between local EM and the EMS algorithm.
From a theoretical perspective, local EM and the EMS algorithm complement each other. Although the statistical methodology literature often characterizes EMS methods as ad hoc, local likelihood suggests otherwise as the EMS algorithm arises naturally from a local likelihood consideration in the context of point processes. Moreover, the EMS algorithm not only serves as a convenient implementation of the local EM algorithm but also provides a set of theoretical tools to better understand the role of local EM. In particular, we present results that reinforce the suggestion that the pair of local EM and penalized likelihood are analogous to that of EM and likelihood. Applications include the analysis of bivariate interval-censored data as well as disease mapping for a rare disease, lupus, in the Greater Toronto Area.
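The EMS iteration can be sketched for the simplest aggregated-count setting (a hedged toy with invented names; the thesis's point-process formulation with local likelihood weights is richer): the E-step spreads each regional count over its fine bins in proportion to the current estimate, the M-step takes the resulting reallocation, and the S-step smooths.

```python
def ems(regions, counts, n_bins, n_iter=100, alpha=0.25):
    """EMS for aggregated counts.  regions[i] lists the fine bins
    pooled into the coarse count counts[i].  E-step: spread each
    regional count over its bins in proportion to the current
    estimate.  S-step: 3-point moving-average smoothing with
    weight alpha on each neighbour (edges clamped)."""
    x = [1.0] * n_bins
    for _ in range(n_iter):
        new = [0.0] * n_bins
        for bins, c in zip(regions, counts):
            tot = sum(x[b] for b in bins)
            for b in bins:
                new[b] += c * x[b] / tot
        x = [(1 - 2 * alpha) * new[b]
             + alpha * new[max(b - 1, 0)]
             + alpha * new[min(b + 1, n_bins - 1)]
             for b in range(n_bins)]
    return x

# Two regions of two fine bins each; both E-step and S-step
# conserve the total count.
est = ems([[0, 1], [2, 3]], [10.0, 30.0], 4)
print(round(sum(est), 6))  # 40.0
```

Dropping the S-step recovers plain EM for the aggregated model; the smoothing step is what the local-likelihood view interprets as the kernel weighting.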
|