
Development and improvement of methods for characterization of HPLC stationary phases

Undin, Torgny January 2011 (has links)
High-Performance Liquid Chromatography (HPLC) is a widely used technique for detecting and purifying substances in both academia and industry. To facilitate the use of, and deepen knowledge in, HPLC, characterization of stationary phases is of utmost importance. Tailor-made characterization methods and workflows are steadily increasing the speed and accuracy with which new separation systems and methods are developed. In fundamental separation science and in preparative chromatography there is a constant need for faster and more accurate methods of adsorption isotherm determination. Some of that demand is met by the steady increase in computational power, but the practical aspects of models and methods must also be developed further. These nonlinear characterization methods yield not only models capable of describing the adsorption isotherm, but also actual values of local adsorption energies, the monolayer saturation capacity of individual interaction sites, and related parameters. The studies presented in this thesis use modern alkali-stable stationary phases as model phases, giving insight into hybrid materials and their separation mechanisms. The thesis includes an update and expansion of the Elution by Characteristic Points (ECP) method for determination of adsorption isotherms: precision is further increased by the ability to use slope data, and usability by a set of guidance rules to apply when determining adsorption isotherms that have inflection points. The thesis further shows the power of combining existing stationary-phase characterization techniques with one another, and what the expansion of these methods can reveal in terms of precision and usability. Finally, a more holistic view of the benefits of combining nonlinear characterization of a stationary phase with the more common linear characterization methods is presented.
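The ECP calculation at the heart of this kind of isotherm determination can be sketched in a few lines. Assuming the ideal-model characteristic relation t(C) = t0 (1 + F dq/dC) for the diffuse rear of an overloaded band (t0 the hold-up time, F the phase ratio), the isotherm q(C) follows by integrating the measured retention data. The profile data, t0, and F below are hypothetical, and the sketch omits the corrections and guidance rules the thesis develops:

```python
import numpy as np

# Hypothetical rear-profile data: concentration C (mM) eluting at time t (min).
C = np.linspace(0.0, 10.0, 200)
t = 2.0 * (1.0 + 0.25 * 30.0 / (1 + 0.5 * C) ** 2)  # synthetic Langmuir-like profile

t0 = 2.0   # column hold-up time (min), assumed known
F = 0.25   # phase ratio, assumed known

# Ideal model: t(C) = t0 * (1 + F * dq/dC)  =>  dq/dC = (t(C) - t0) / (t0 * F)
dq_dC = (t - t0) / (t0 * F)

# Integrate dq/dC from 0 to C (trapezoidal rule) to recover the isotherm q(C).
q = np.concatenate(([0.0], np.cumsum(0.5 * (dq_dC[1:] + dq_dC[:-1]) * np.diff(C))))
```

For the synthetic data above the recovered q(C) is the Langmuir isotherm 30C/(1 + 0.5C), which is one way to sanity-check such a pipeline before applying it to measured profiles.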

Continuous Speech Recognition by Combining MFCC and PNCC Attributes with SS, WD, MAP and FRN Methods of Robustness

Arcos Gordillo, Christian Dayan 09 June 2014 (has links)
The growing interest in having machines imitate the everyday process of human communication has made speech recognition one of the most heavily researched and most important areas of technology in recent decades. Its main challenge is to develop robust systems that reduce the additive noise of the environments in which the speech signal is acquired, before that signal is fed to the recognizers. This work therefore presents four different ways of improving the performance of continuous speech recognition in the presence of additive noise: Wavelet Denoising (WD) and Spectral Subtraction (SS) for speech enhancement, and Histogram Mapping (MAP) and Filtering with Neural Networks (FRN) for feature compensation. These methods are applied both in isolation and simultaneously, in pairs, to minimize the mismatch caused by noise in the speech signal. In addition to the proposed robustness methods, and because recognizers depend fundamentally on the speech features used, two feature-extraction algorithms are examined, MFCC and PNCC, which represent the speech signal as a sequence of vectors containing short-time spectral information. The methods are evaluated in experiments using the HTK and Matlab software and the TIMIT (speech) and NOISEX-92 (noise) databases. Two kinds of tests were carried out. In the first, a baseline system using only MFCC or PNCC features is assessed, showing how strongly recognition degrades as the signal-to-noise ratio decreases. In the second, the baseline is combined with the robustness methods proposed here, and the results of the methods acting alone and in combination are compared. Combining methods simultaneously is not always advantageous; in general, the best result is obtained by combining MAP with PNCC features.
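Of the feature-compensation methods, histogram mapping can be sketched as empirical CDF matching per feature dimension: each noisy feature value is mapped onto the quantiles of a clean training distribution. A minimal sketch, assuming compensation is applied independently per MFCC dimension; the arrays here are hypothetical stand-ins for real feature columns:

```python
import numpy as np

def histogram_map(noisy, clean_ref):
    """Map each value in `noisy` to the clean distribution by CDF matching."""
    noisy_sorted = np.sort(noisy)
    clean_sorted = np.sort(clean_ref)
    # Empirical CDF position of each noisy value...
    ranks = np.searchsorted(noisy_sorted, noisy, side="right") / len(noisy)
    # ...mapped onto the clean distribution's quantile function.
    return np.quantile(clean_sorted, np.clip(ranks, 0.0, 1.0))

# Hypothetical single MFCC dimension: clean training values, noisy test values.
rng = np.random.default_rng(0)
clean = rng.normal(0.0, 1.0, 5000)
noisy = rng.normal(0.5, 1.5, 1000)           # shifted/scaled by additive noise
compensated = histogram_map(noisy, clean)    # distribution matched to `clean`
```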

Methodical and Clinical Evaluation of a Modern Flat-Panel Detector and the Dual-Energy Technique

Freund, Torsten 28 April 2006 (has links)
In an initial study we compared the XQi Revolution, an indirect detector based on CsI (cesium iodide)/a-Si (amorphous silicon), with a direct digital radiography system based on a-Se (amorphous selenium), using a CDRAD phantom at four different entrance doses and a TRG phantom at two different entrance doses. Using the image quality factor computed from the CDRAD phantom, we showed that the indirect system offers better detail detection than the direct system at low doses; a positive trend was also seen with the TRG phantom. In a second study we used patient images to assess the image quality of the dual-energy system at two dose levels, standard dose and double dose, corresponding to speed equivalents of 400/1000 and 200/500 respectively. At the higher dose we found a significant reduction of noise in the bone and soft-tissue images, while disturbance from motion artifacts increased significantly; overall, radiation dose did not significantly influence the perception of dual-energy image quality. We then compared the detectability of calcified lung pathologies on standard posteroanterior chest radiographs alone and with additional dual-energy images, with confirmed CT findings as the gold standard. Adding dual-energy images significantly increased sensitivity, a result confirmed by the quality factor, which describes the image properties cumulatively; Brunner and Langer's test showed a highly significant difference between posteroanterior radiography alone and dual-energy imaging in the detection of calcified chest abnormalities. Finally, we analogously examined the detectability of small non-calcified pulmonary nodules. Here too, adding dual energy significantly improved nodule detection, overall and across size categories, with a positive trend in sensitivity and specificity, and the readers' average decision confidence increased significantly. Dual-energy subtraction is thus a valuable complement to standard posteroanterior radiography in the diagnosis of both calcified and non-calcified lung pathologies, and has the potential to become a routine application in chest radiography.
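The dual-energy subtraction behind such bone and soft-tissue images can be sketched as weighted subtraction of the log-transformed low- and high-kVp exposures. A minimal sketch; the weights below are hypothetical placeholders, since in practice they are calibrated per detector and tube setting:

```python
import numpy as np

def dual_energy_images(i_low, i_high, w_soft_cancel=0.55, w_bone_cancel=0.35,
                       eps=1e-6):
    """Weighted log subtraction of low/high-kVp exposures (hypothetical weights).

    A weight chosen to cancel the soft-tissue signal yields a bone image;
    a weight chosen to cancel the bone signal yields a soft-tissue image.
    """
    log_low = np.log(np.maximum(i_low.astype(float), eps))
    log_high = np.log(np.maximum(i_high.astype(float), eps))
    bone = log_high - w_soft_cancel * log_low   # soft tissue cancelled
    soft = log_high - w_bone_cancel * log_low   # bone cancelled
    return bone, soft
```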

Atheromatosis of the bronchial artery system and possible correlation with the coronary circulation

Κωτούλας, Χριστόφορος 22 December 2008 (has links)
Aim of the study: We conducted this study to demonstrate the broncho-coronary anastomotic routes in a porcine model. Additionally, we estimated the incidence of arteriosclerosis in the bronchial arteries. Material and Methods: Six porcine heart-lung blocks were used. Furthermore, bronchial artery specimens were obtained from 40 patients who underwent thoracotomy; the patients' clinical and laboratory atherosclerotic risk factors were documented in detail. Results: Using CT, digital subtraction angiography, and colored latex, we demonstrated an anastomotic network between the bronchial and, mainly, the left coronary arteries in 5 of 6 specimens. Histology revealed no established atherosclerotic lesions or narrowing of the lumen, only medial calcific sclerosis in 2.5% of specimens, which was independent of the arteriosclerotic risk factors. Conclusions: Given that the bronchial arteries exhibit only a minimal degree of medial calcific sclerosis, we hypothesize that they could contribute to coronary flow through the broncho-coronary anastomoses in cases of severe coronary artery disease. Our study underlines the importance of the bronchial arteries and their anastomoses to the coronaries in cases of bronchial artery embolization, heart-lung transplantation, and repair of thoracic aortic aneurysms.

Enhancement of music signals in noisy environments

Παπανικολάου, Παναγιώτης 20 October 2010 (has links)
This thesis applies noise-reduction algorithms to music signals and draws conclusions about the performance of each algorithm for each musical genre. The main aims are to clarify the basic problems of sound enhancement and to present the various algorithms developed to solve them. After a brief introduction to the basic concepts of sound enhancement, we examine and analyze representative algorithms from the three main classes of denoising techniques proposed in the speech-enhancement literature: spectral subtractive algorithms, statistical-model-based algorithms, and subspace algorithms. To evaluate these algorithms we use objective quality measures, whose results let us compare the performance of each algorithm. Using four different objective measures, we conduct experiments that yield a set of indicative values for both within-class and across-class algorithm comparisons. From these comparisons we draw useful conclusions about the choice of parameters for each algorithm and about each algorithm's suitability for specific noise conditions and specific musical genres.
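As a concrete instance of the spectral subtractive class, a minimal magnitude spectral subtraction pass might look like the sketch below. The noise estimate from the first frames and the over-subtraction and floor parameters (alpha, beta) are assumptions, not the thesis's settings:

```python
import numpy as np
from scipy.signal import stft, istft

def spectral_subtraction(x, fs, noise_frames=10, alpha=2.0, beta=0.02):
    """Subtract a noise magnitude estimate from each STFT frame."""
    _, _, X = stft(x, fs=fs, nperseg=512)
    mag, phase = np.abs(X), np.angle(X)
    # Noise magnitude estimated from the first few (assumed signal-free) frames.
    noise_mag = mag[:, :noise_frames].mean(axis=1, keepdims=True)
    # Over-subtract (alpha) and floor (beta) to limit musical-noise artifacts.
    clean_mag = np.maximum(mag - alpha * noise_mag, beta * noise_mag)
    _, y = istft(clean_mag * np.exp(1j * phase), fs=fs, nperseg=512)
    return y
```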

Evaluation of the process of repair of periapical lesions after endodontic treatment by digital subtraction radiography

Silva, Janaína Benfica e 30 November 2006 (has links)
Control of the process of repair or progression of periapical lesions after endodontic treatment is monitored by conventional or digital radiography. This study used digital subtraction radiography (DSR), in which longitudinally acquired images are subtracted so that changes in the alveolar bone are visualized against a uniform gray background. The objectives were: (1) to evaluate the repair process of periapical lesions after endodontic treatment using DSR; (2) to quantify the gain or loss of mineral density in the lesion area by means of point/pixel values, area (histogram), and linear measures (profile line), using average pixel values; (3) to compare the diagnostic information suggestive of repair obtained by subjective evaluation of DSR with conventional radiographic evaluation and digitized images; and (4) to evaluate the contribution of DSR to early identification of repair. The sample consisted of twelve patients with a total of seventeen periapical lesions. The radiographs were digitized and submitted to DSR using the DSR software, and pixel values of the subtracted images were determined with the Image Tool software. The conventional radiographs and the digitized and subtracted images were also evaluated qualitatively. The results showed a gain in mineral density, with mean±SD of 133.49±5.17, 130.27±5.77 and 129.41±4.46 for the point/pixel, histogram and profile-line tools, respectively. For the numerical gain, Pearson's correlation coefficient (r) was: points/histogram = 0.746; points/profile line = 0.724; histogram/profile line = 0.860. When the numerical values were converted to percentage gain, mean±SD of 0.67±4.01, 1.21±4.33 and 1.16±3.36 were obtained for the point/pixel, histogram and profile-line tools, respectively; Spearman's correlation coefficient (rs) was: points/histogram = 0.697; points/profile line = 0.646; histogram/profile line = 0.844. In the qualitative analysis, the frequency of correctly ordering the repair sequence using conventional radiography, digitized images and DSR was 37.3%, 31.4% and 31.4%, respectively. It was concluded that: (1) the repair of periapical lesions after endodontic treatment can be evaluated quantitatively by longitudinal analysis with DSR; (2) any of the three tools can be used to quantify repair, given the correlation between repair time and the increase in pixel value; (3) the comparative evaluation of the subjective methods showed that conventional radiography, digitized images and DSR were all capable of showing the repair process from the first radiograph (15 days), with no difference among them; and (4) the quantitative DSR evaluation was able to show the onset of repair as early as 15 days after the start of endodontic treatment, although repair became clearly effective only from 105 days onward.
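The core DSR operation, subtracting a registered follow-up radiograph from the baseline and reading the mean pixel change in a lesion region of interest, can be sketched as follows. The array names and the offset-to-gray convention are assumptions:

```python
import numpy as np

def subtract_radiographs(baseline, follow_up, offset=128.0):
    """Subtraction image: unchanged bone maps to uniform gray (= offset)."""
    diff = follow_up.astype(float) - baseline.astype(float)
    return np.clip(diff / 2.0 + offset, 0, 255)

def roi_mean_gain(sub_image, roi_mask, offset=128.0):
    """Mean pixel gain in the lesion ROI; > 0 suggests mineral density gain."""
    return sub_image[roi_mask].mean() - offset
```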

DSA Image Registration And Respiratory Motion Tracking Using Probabilistic Graphical Models

Sundarapandian, Manivannan January 2016 (has links) (PDF)
This thesis addresses three problems related to image registration, prediction and tracking, applied to angiography and oncology. Various probabilistic models are employed to characterize image deformations, target motions and state estimates.

(i) In Digital Subtraction Angiography (DSA), high-quality visualization of blood motion in the vessels is essential in both diagnostic and interventional applications. To reduce the inherent movement artifacts in DSA, non-rigid image registration is applied before subtracting the mask from the contrast image. DSA image registration is a challenging problem, as it requires non-rigid matching across spatially non-uniform control points at high speed. We model the sub-pixel matching problem as a labeling problem on a non-uniform Markov Random Field (MRF), use quad-trees in a novel way to generate the non-uniform grid structure, and optimize the registration cost with graph cuts. The MRF formulation produces a smooth displacement field, which reduces artifacts better than the conventional approach of registering the control points independently. This approach is then improved with two models. First, we introduce the concept of pivotal and non-pivotal control points: pivotal control points are nodes in the Markov network close to edges in the mask image, while non-pivotal control points lie in soft-tissue regions; this leads to a novel MRF framework and energy formulation. Second, we propose a Gaussian MRF model and solve the energy minimization problem for sub-pixel DSA registration using a Random Walker (RW). An incremental registration approach is developed using the quad-tree-based MRF structure and RW, in which the density of control points is hierarchically increased at each level M depending on the features to be used and the required accuracy; a novel numbering scheme of the control points allows the computations done at level M to be reused at level M + 1. Both models accelerate performance without compromising artifact reduction. We also provide a CUDA-based design of the algorithm and show performance acceleration on a GPU. The approach was tested on 25 clinical data sets, and the results of quantitative analysis and clinical assessment are presented.

(ii) In External Beam Radiation Therapy (EBRT), the lung diaphragm apex can be used as an internal marker to monitor the intra-fraction motion of thoracic and abdominal tumors. However, tracking the apex position from image-based observations is challenging, as it undergoes both position and shape variation. We propose a novel approach for tracking the ipsilateral hemidiaphragm apex (IHDA) position on CBCT projection images: the diaphragm state is modeled as a spatiotemporal MRF, and the trace of the apex is obtained by solving an energy minimization problem with graph cuts. Tested on 15 clinical data sets, this approach outperforms the conventional full-search method in accuracy. A GPU-based heterogeneous CUDA implementation increases the viability of the approach for clinical use.

(iii) In an adaptive radiotherapy system, irrespective of the method used for target observation, there is an inherent latency in beam control due to mechanical movement and processing delays. Predicting the target position while the beam is on target is therefore essential for control precision. We propose a novel prediction model for the breathing pattern, called the offset sine model, using IHDA positions from CBCT images as measurements and an Unscented Kalman Filter (UKF) for state estimation. Results on 15 clinical datasets show that the offset sine model outperforms the state-of-the-art LCM model in prediction accuracy.
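The prediction idea can be sketched by fitting an offset sine y(t) = a + b sin(ωt + φ) to recent apex positions and extrapolating past the control latency. The model form, parameters, and data below are assumptions, and the thesis pairs the model with a UKF rather than the batch fit shown here:

```python
import numpy as np
from scipy.optimize import curve_fit

def offset_sine(t, a, b, omega, phi):
    return a + b * np.sin(omega * t + phi)

# Hypothetical IHDA apex positions (mm) sampled from CBCT projections.
rng = np.random.default_rng(1)
t = np.linspace(0.0, 20.0, 200)                      # seconds
y = 12.0 + 6.0 * np.sin(2 * np.pi / 4.0 * t + 0.3)   # ~4 s breathing period
y += rng.normal(0.0, 0.4, t.size)                    # measurement noise

# Fit on the observed window, then predict ahead by the system latency.
p0 = [y.mean(), y.std() * np.sqrt(2), 2 * np.pi / 4.0, 0.0]
params, _ = curve_fit(offset_sine, t, y, p0=p0)
latency = 0.5                                        # seconds of beam-control delay
predicted = offset_sine(t[-1] + latency, *params)
```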

Optical Localization System with a Pan/Tilt Camera

Senčuch, Daniel January 2018 (has links)
The effective surveillance of large critical areas is crucial for their security and privacy, yet there is no publicly available, acceptable solution for automating this task. This thesis develops an application that combines a pan-tilt robotic manipulator with a visible-spectrum camera. Based on the pan-tilt unit's position and the camera's images, the application searches for semantically significant changes in the captured environment and marks these regions of interest.
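A minimal sketch of the change-marking step, differencing the current frame against a reference view for the same pan-tilt pose (OpenCV calls only; the pose-keyed reference store and the threshold values are assumptions):

```python
import cv2

def mark_changes(reference_gray, frame_gray, min_area=500):
    """Return bounding boxes of regions that differ from the reference view."""
    diff = cv2.absdiff(reference_gray, frame_gray)
    _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
    mask = cv2.dilate(mask, None, iterations=2)  # merge nearby fragments
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours
            if cv2.contourArea(c) >= min_area]
```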

Access Blood Flow Measurement Using Angiography

Koirala, Nischal 26 September 2018 (has links)
No description available.

Robust Subspace Estimation Using Low-Rank Optimization: Theory and Applications in Scene Reconstruction, Video Denoising, and Activity Recognition

Oreifej, Omar 01 January 2013 (has links)
In this dissertation, we discuss the problem of robust linear subspace estimation using low-rank optimization and propose three formulations of it. We demonstrate how these formulations can be used to solve fundamental computer vision problems, and provide superior performance in terms of accuracy and running time. Consider a set of observations extracted from images (such as pixel gray values, local features, trajectories . . . etc). If the assumption that these observations are drawn from a liner subspace (or can be linearly approximated) is valid, then the goal is to represent each observation as a linear combination of a compact basis, while maintaining a minimal reconstruction error. One of the earliest, yet most popular, approaches to achieve that is Principal Component Analysis (PCA). However, PCA can only handle Gaussian noise, and thus suffers when the observations are contaminated with gross and sparse outliers. To this end, in this dissertation, we focus on estimating the subspace robustly using low-rank optimization, where the sparse outliers are detected and separated through the `1 norm. The robust estimation has a two-fold advantage: First, the obtained basis better represents the actual subspace because it does not include contributions from the outliers. Second, the detected outliers are often of a specific interest in many applications, as we will show throughout this thesis. We demonstrate four different formulations and applications for low-rank optimization. First, we consider the problem of reconstructing an underwater sequence by removing the iii turbulence caused by the water waves. The main drawback of most previous attempts to tackle this problem is that they heavily depend on modelling the waves, which in fact is ill-posed since the actual behavior of the waves along with the imaging process are complicated and include several noise components; therefore, their results are not satisfactory. In contrast, we propose a novel approach which outperforms the state-of-the-art. The intuition behind our method is that in a sequence where the water is static, the frames would be linearly correlated. Therefore, in the presence of water waves, we may consider the frames as noisy observations drawn from a the subspace of linearly correlated frames. However, the noise introduced by the water waves is not sparse, and thus cannot directly be detected using low-rank optimization. Therefore, we propose a data-driven two-stage approach, where the first stage “sparsifies” the noise, and the second stage detects it. The first stage leverages the temporal mean of the sequence to overcome the structured turbulence of the waves through an iterative registration algorithm. The result of the first stage is a high quality mean and a better structured sequence; however, the sequence still contains unstructured sparse noise. Thus, we employ a second stage at which we extract the sparse errors from the sequence through rank minimization. Our method converges faster, and drastically outperforms state of the art on all testing sequences. Secondly, we consider a closely related situation where an independently moving object is also present in the turbulent video. More precisely, we consider video sequences acquired in a desert battlefields, where atmospheric turbulence is typically present, in addition to independently moving targets. Typical approaches for turbulence mitigation follow averaging or de-warping techniques. 
Although these methods can reduce the turbulence, they distort the independently moving objects which can often be of great interest. Therefore, we address the iv problem of simultaneous turbulence mitigation and moving object detection. We propose a novel three-term low-rank matrix decomposition approach in which we decompose the turbulence sequence into three components: the background, the turbulence, and the object. We simplify this extremely difficult problem into a minimization of nuclear norm, Frobenius norm, and `1 norm. Our method is based on two observations: First, the turbulence causes dense and Gaussian noise, and therefore can be captured by Frobenius norm, while the moving objects are sparse and thus can be captured by `1 norm. Second, since the object’s motion is linear and intrinsically different than the Gaussian-like turbulence, a Gaussian-based turbulence model can be employed to enforce an additional constraint on the search space of the minimization. We demonstrate the robustness of our approach on challenging sequences which are significantly distorted with atmospheric turbulence and include extremely tiny moving objects. In addition to robustly detecting the subspace of the frames of a sequence, we consider using trajectories as observations in the low-rank optimization framework. In particular, in videos acquired by moving cameras, we track all the pixels in the video and use that to estimate the camera motion subspace. This is particularly useful in activity recognition, which typically requires standard preprocessing steps such as motion compensation, moving object detection, and object tracking. The errors from the motion compensation step propagate to the object detection stage, resulting in miss-detections, which further complicates the tracking stage, resulting in cluttered and incorrect tracks. In contrast, we propose a novel approach which does not follow the standard steps, and accordingly avoids the aforementioned diffi- culties. Our approach is based on Lagrangian particle trajectories which are a set of dense trajectories obtained by advecting optical flow over time, thus capturing the ensemble motions v of a scene. This is done in frames of unaligned video, and no object detection is required. In order to handle the moving camera, we decompose the trajectories into their camera-induced and object-induced components. Having obtained the relevant object motion trajectories, we compute a compact set of chaotic invariant features, which captures the characteristics of the trajectories. Consequently, a SVM is employed to learn and recognize the human actions using the computed motion features. We performed intensive experiments on multiple benchmark datasets, and obtained promising results. Finally, we consider a more challenging problem referred to as complex event recognition, where the activities of interest are complex and unconstrained. This problem typically pose significant challenges because it involves videos of highly variable content, noise, length, frame size . . . etc. In this extremely challenging task, high-level features have recently shown a promising direction as in [53, 129], where core low-level events referred to as concepts are annotated and modelled using a portion of the training data, then each event is described using its content of these concepts. However, because of the complex nature of the videos, both the concept models and the corresponding high-level features are significantly noisy. 
In order to address this problem, we propose a novel low-rank formulation, which combines the precisely annotated videos used to train the concepts, with the rich high-level features. Our approach finds a new representation for each event, which is not only low-rank, but also constrained to adhere to the concept annotation, thus suppressing the noise, and maintaining a consistent occurrence of the concepts in each event. Extensive experiments on large scale real world dataset TRECVID Multimedia Event Detection 2011 and 2012 demonstrate that our approach consistently improves the discriminativity of the high-level features by a significant margin.
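The basic low-rank-plus-sparse decomposition underlying these formulations, minimizing ||L||_* + λ||S||_1 subject to M = L + S, can be sketched with an inexact augmented Lagrangian loop. This is a minimal sketch of generic robust PCA, not the thesis's exact solver, and the parameter defaults are common heuristics rather than its settings:

```python
import numpy as np

def robust_pca(M, lam=None, mu=None, tol=1e-7, max_iter=500):
    """Decompose M into low-rank L and sparse S via inexact ALM."""
    m, n = M.shape
    lam = lam if lam is not None else 1.0 / np.sqrt(max(m, n))
    mu = mu if mu is not None else 0.25 * m * n / np.abs(M).sum()
    Y = np.zeros_like(M)   # Lagrange multipliers
    S = np.zeros_like(M)
    for _ in range(max_iter):
        # L-step: singular value thresholding of (M - S + Y/mu).
        U, sig, Vt = np.linalg.svd(M - S + Y / mu, full_matrices=False)
        L = (U * np.maximum(sig - 1.0 / mu, 0.0)) @ Vt
        # S-step: elementwise soft thresholding.
        R = M - L + Y / mu
        S = np.sign(R) * np.maximum(np.abs(R) - lam / mu, 0.0)
        # Dual update and convergence check on the residual.
        Y += mu * (M - L - S)
        if np.linalg.norm(M - L - S) <= tol * np.linalg.norm(M):
            break
    return L, S
```

Stacking the vectorized frames of a sequence as the columns of M, L would then recover the linearly correlated component and S the sparse errors, in the spirit of the two-stage underwater reconstruction described above.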
