About: The Global ETD Search service is a free service for researchers to find electronic theses and dissertations, provided by the Networked Digital Library of Theses and Dissertations (NDLTD). Metadata is collected from universities around the world; if you manage a university, consortium, or country archive and want to be added, details can be found on the NDLTD website.
41

Performance Analysis of Non Local Means Algorithm using Hardware Accelerators

Antony, Daniel Sanju January 2016 (has links) (PDF)
Image de-noising forms an integral part of image processing. It is used as a standalone algorithm for improving the quality of images obtained through a camera, as well as a preprocessing stage for applications like face recognition, super resolution, etc. Non Local Means (NL-Means) and the Bilateral Filter are two computationally complex de-noising algorithms which provide good de-noising results. Due to their computational complexity, real-time applications of these filters are limited. In this thesis, we propose the use of hardware accelerators such as GPUs (Graphics Processing Units) and FPGAs (Field Programmable Gate Arrays) to speed up filter execution and implement the filters efficiently. GPU-based implementation of these filters is carried out using the Open Computing Language (OpenCL). The basic objective of this research is to perform high-speed de-noising without compromising on quality. We implement a basic NL-Means filter, a Fast NL-Means filter, and a Bilateral filter using Gauss polynomial decomposition on GPU. We also propose a modification to the existing NL-Means algorithm and the Gauss polynomial Bilateral filter: instead of the Gaussian spatial kernel used in the standard algorithms, a box spatial kernel is introduced to improve execution speed. This work is a step towards making real-time implementation of these algorithms possible. Results show that the NL-Means implementation on GPU using OpenCL is about 25x faster than a regular CPU-based implementation for larger images (1024x1024). For Fast NL-Means, the GPU-based implementation is about 90x faster than the CPU implementation. Even with the improved execution time, embedded applications of NL-Means are limited by the power and thermal restrictions of the GPU device. To create a lower-power and faster implementation, we have also implemented the algorithm on FPGA.
FPGAs are reconfigurable devices that enable a custom architecture for parallel execution of the algorithm. The FPGA execution time for smaller images (256x256) is about 200x faster than the CPU implementation and about 25x faster than GPU execution. Moreover, the power requirement of the FPGA design (0.53 W) is much lower than that of the CPU (30 W) and the GPU (200 W).
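As a rough illustration of the box-spatial-kernel idea, here is a minimal NumPy sketch of NL-Means in which patch distances are plain (box-weighted) mean squared differences rather than Gaussian-weighted ones. This is a slow reference implementation, not the thesis's GPU/FPGA code, and the parameter values are illustrative assumptions.

```python
import numpy as np

def nl_means_box(img, patch=3, search=7, h=0.1):
    """Reference NL-Means with a flat (box) spatial kernel over patches.

    Patch distances are plain mean squared differences (box kernel)
    rather than Gaussian-weighted sums, trading a little quality for speed.
    """
    pad = patch // 2
    p = np.pad(img, pad, mode="reflect")
    rows, cols = img.shape
    out = np.zeros_like(img)
    s = search // 2
    for i in range(rows):
        for j in range(cols):
            ref = p[i:i + patch, j:j + patch]       # patch centred on (i, j)
            wsum, acc = 0.0, 0.0
            for di in range(max(i - s, 0), min(i + s + 1, rows)):
                for dj in range(max(j - s, 0), min(j + s + 1, cols)):
                    cand = p[di:di + patch, dj:dj + patch]
                    d2 = np.mean((ref - cand) ** 2)  # box-weighted distance
                    w = np.exp(-d2 / (h * h))
                    wsum += w
                    acc += w * img[di, dj]
            out[i, j] = acc / wsum
    return out
```

The doubly nested search loop is exactly what maps well onto GPU work-items or an FPGA pipeline: every output pixel is independent.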
42

Dynamický model nelineárního oscilátoru s piezoelektrickou vrstvou / Dynamic model of nonlinear oscillator with piezoelectric layer

Sosna, Petr January 2021 (has links)
This master's thesis focuses on the analysis of the behaviour of a magnetopiezoelastic vibrating cantilever beam. In the theoretical part, discretized parameters are derived which describe the real system as a single-degree-of-freedom model. This model is then used for qualitative and quantitative analysis of the behaviour of the harvester. The frequency response of the harmonically excited system is examined in two- and three-parameter analyses with respect to the excitation amplitude, the electrical load, and the distance between the magnets. The last of these is the main parameter of the work, so the influence of the magnet distance is also examined with the help of bifurcation diagrams. These diagrams were additionally used to create an oscillation "map" which shows, for each loading condition, what magnet distance should be set to generate the most power. The thesis concludes with demonstrations of several phenomena that can considerably affect the behaviour of the system when the excitation is not purely harmonic.
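A magnetically bistable beam of this kind is commonly reduced to a Duffing-type single-degree-of-freedom equation. The sketch below integrates one such oscillator with RK4; the coefficients are illustrative assumptions, not the thesis's identified parameters.

```python
import numpy as np

def duffing_step(x, v, t, dt, delta=0.2, a=1.0, b=1.0, amp=0.3, omega=1.2):
    """One RK4 step of the bistable Duffing oscillator
    x'' + delta*x' - a*x + b*x**3 = amp*cos(omega*t),
    a common reduced model of a magnetically bistable cantilever."""
    def f(x, v, t):
        return v, -delta * v + a * x - b * x ** 3 + amp * np.cos(omega * t)
    k1x, k1v = f(x, v, t)
    k2x, k2v = f(x + dt / 2 * k1x, v + dt / 2 * k1v, t + dt / 2)
    k3x, k3v = f(x + dt / 2 * k2x, v + dt / 2 * k2v, t + dt / 2)
    k4x, k4v = f(x + dt * k3x, v + dt * k3v, t + dt)
    return (x + dt / 6 * (k1x + 2 * k2x + 2 * k3x + k4x),
            v + dt / 6 * (k1v + 2 * k2v + 2 * k3v + k4v))

x, v, dt = 1.0, 0.0, 0.01
xs = []
for k in range(20000):
    x, v = duffing_step(x, v, k * dt, dt)
    xs.append(x)
```

Sweeping a parameter (e.g., the linear stiffness `a`, which in the physical system is tuned by the magnet distance) and recording the steady-state amplitudes is how bifurcation diagrams like those in the thesis are built up.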
43

Map-aided localization for autonomous driving using a particle filter

Eriksson, Simon January 2020 (has links)
Vehicles losing their GPS signal is a considerable issue for autonomous vehicles and can be a danger to people in their vicinity. To circumvent this issue, a particle filter localization technique using pre-generated offline Open Street Map (OSM) maps was investigated in a software simulation of Scania's heavy-duty trucks. The localization technique runs in real time and provides a way to localize the vehicle safely if the starting position is known. Although access to global localization was limited, the particle filter succeeded in localizing the vehicle in the vicinity of the correct road segment by creating a graph of the map information and matching the vehicle's sensor trajectory to it. The mean error of the particle filter localization technique in optimal conditions is 16 m, which is 20% less than an optimally tuned dead-reckoning solution but about 50% larger than that of a Global Positioning System fix. The final product shows potential for expansion but requires more investigation before real-world deployment.
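The predict-weight-resample cycle of a bootstrap particle filter can be sketched in a few lines. This 1-D toy (hypothetical names and noise levels, not Scania's simulator) stands in for the map-matched position fix with a noisy scalar measurement.

```python
import numpy as np

def particle_filter_step(particles, weights, motion, meas, meas_std, rng):
    """One predict-weight-resample cycle of a bootstrap particle filter."""
    # Predict: propagate each particle with noisy odometry.
    particles = particles + motion + rng.normal(0.0, 0.5, size=particles.shape)
    # Weight: Gaussian likelihood of the (map-matched) measurement.
    weights = weights * np.exp(-0.5 * ((meas - particles) / meas_std) ** 2)
    weights = weights / weights.sum()
    # Resample when the effective sample size drops below half the population.
    if 1.0 / np.sum(weights ** 2) < 0.5 * len(particles):
        idx = rng.choice(len(particles), size=len(particles), p=weights)
        particles = particles[idx]
        weights = np.full(len(particles), 1.0 / len(particles))
    return particles, weights

rng = np.random.default_rng(1)
particles = rng.uniform(-10.0, 10.0, 500)   # unknown start within 20 m
weights = np.full(500, 1.0 / 500)
true_pos = 0.0
for _ in range(20):
    true_pos += 1.0                          # vehicle drives 1 m per step
    meas = true_pos + rng.normal(0.0, 1.0)   # noisy map-matched position fix
    particles, weights = particle_filter_step(particles, weights, 1.0, meas, 1.0, rng)
estimate = np.sum(particles * weights)
```

In the thesis's setting the scalar measurement is replaced by matching the travelled trajectory against a graph built from OSM road segments, but the filter skeleton is the same.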
44

Analysis of the effects of phase noise and frequency offset in orthogonal frequency division multiplexing (OFDM) systems

Erdogan, Ahmet Yasin 03 1900 (has links)
Approved for public release, distribution is unlimited / Orthogonal frequency division multiplexing (OFDM) is being successfully used in numerous applications. It was chosen for the IEEE 802.11a wireless local area network (WLAN) standard, and it is being considered for fourth-generation mobile communication systems. Along with its many attractive features, OFDM has some principal drawbacks, of which sensitivity to frequency errors is the most dominant. In this thesis, the effects of frequency offset and phase noise on OFDM-based communication systems are investigated under a variety of channel conditions covering both indoor and outdoor environments. Simulation performance results of the OFDM system for these channels are presented. / Lieutenant Junior Grade, Turkish Navy
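The orthogonality loss caused by carrier frequency offset can be demonstrated directly: modulate with an IFFT, rotate the time-domain samples by a normalized offset, demodulate with an FFT, and measure the residual error. This is a generic sketch of the mechanism, not the thesis's simulation setup.

```python
import numpy as np

def ofdm_with_cfo(symbols, eps):
    """Pass one OFDM symbol through an ideal channel with normalized
    carrier frequency offset eps (fraction of the subcarrier spacing)."""
    n = len(symbols)
    tx = np.fft.ifft(symbols) * n                 # time-domain OFDM symbol
    t = np.arange(n)
    rx = tx * np.exp(2j * np.pi * eps * t / n)    # CFO rotates the samples
    return np.fft.fft(rx) / n                     # demodulate

# A deterministic QPSK-like constellation on 64 subcarriers.
qpsk = (1 - 2 * (np.arange(64) % 2)) + 1j * (1 - 2 * ((np.arange(64) // 2) % 2))
clean = ofdm_with_cfo(qpsk, 0.0)
offset = ofdm_with_cfo(qpsk, 0.2)
evm_clean = np.mean(np.abs(clean - qpsk) ** 2)    # ~0: orthogonality intact
evm_off = np.mean(np.abs(offset - qpsk) ** 2)     # large: ICI + phase rotation
```

With `eps = 0` the subcarriers come back exactly; a 20% offset of the subcarrier spacing produces both a common phase rotation and inter-carrier interference, which is the sensitivity the abstract refers to.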
45

Signal transmission in stochastic neuron models with non-white or non-Gaussian noise

Droste, Felix 02 September 2015 (has links)
This thesis is concerned with the effect of non-white or non-Gaussian synaptic noise on the information transmission properties of single neurons. Synaptic noise subsumes the massive input that a cell receives from thousands of other neurons. In the framework of stochastic neuron models, this input is described by a stochastic process with suitably chosen statistics. If the overall arrival rate of presynaptic action potentials is high and constant in time and if each individual incoming spike has only a small effect on the dynamics of the cell, the massive synaptic input can be modeled as a Gaussian process. For mathematical tractability, one often assumes that furthermore, the input is devoid of temporal structure, i.e. that it is well described by a Gaussian white noise. This is the so-called diffusion approximation (DA). The present thesis explores neuronal signal transmission when the conditions that underlie the DA are no longer met, i.e. when one must describe the synaptic background activity by a stochastic process that is not white, not Gaussian, or neither. We explore three distinct scenarios by means of simulations and analytical calculations: First, we study a cell that receives not one but two signals, additionally filtered by synaptic short-term plasticity (STP), so that the background has to be described by a colored noise. The second scenario deals with synaptic weights that cannot be considered small; here, the effective noise is no longer Gaussian and the shot-noise nature of the input has to be taken into account. Finally, we study the effect of a presynaptic population that does not fire at a rate which is constant in time but instead undergoes transitions between states of high and low activity, so-called up and down states.
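The shot-noise scenario — finite-amplitude Poisson input that the diffusion approximation cannot capture — can be sketched with a leaky integrate-and-fire neuron driven by Poisson kicks. The parameters below are illustrative assumptions, not values from the thesis.

```python
import numpy as np

def lif_shot_noise(rate, weight, t_sim=20.0, dt=0.001, tau=0.02,
                   v_thresh=1.0, v_reset=0.0, seed=0):
    """Euler simulation of a leaky integrate-and-fire neuron driven by
    excitatory Poisson shot noise of finite amplitude `weight`.
    Returns the output spike times."""
    rng = np.random.default_rng(seed)
    v, spikes = 0.0, []
    for k in range(int(t_sim / dt)):
        n_in = rng.poisson(rate * dt)           # presynaptic spikes this step
        v += dt * (-v / tau) + weight * n_in    # leak + finite-size kicks
        if v >= v_thresh:
            spikes.append(k * dt)
            v = v_reset
    return spikes

spikes = lif_shot_noise(rate=500.0, weight=0.1)
firing_rate = len(spikes) / 20.0   # output rate in Hz over the 20 s run
```

With `weight` small and `rate` large this converges to the Gaussian (diffusion) regime; making the kicks larger at fixed mean drive, as here, is exactly where the shot-noise character of the input starts to matter.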
46

Experimental Studies On A New Class Of Combinatorial LDPC Codes

Dang, Rajdeep Singh 05 1900 (has links)
We implement a package for the construction of a new class of Low Density Parity Check (LDPC) codes based on a new random high-girth graph construction technique, and study the performance of the codes so constructed on both the Additive White Gaussian Noise (AWGN) channel and the Binary Erasure Channel (BEC). Our codes are "near regular", meaning that the left degree of any node in the constructed Tanner graph varies by at most 1 from the average left degree, and likewise the right degree. Simulations for rate-half codes indicate that the codes perform better than both the regular Progressive Edge Growth (PEG) codes, which are constructed using a similar random technique, and the MacKay random codes. For high rates, the ARG (Almost Regular high Girth) codes perform better than the PEG codes at low to medium SNRs, but the PEG codes seem to do better at high SNRs. We have tracked both near-codewords and small-weight codewords for these codes to examine the performance at high rates. For the binary erasure channel, the performance of the ARG codes is better than that of the PEG codes. We have also proposed a modification of the sum-product decoding algorithm, where a quantity called the "node credibility" is used to appropriately process messages to check nodes. This technique substantially reduces the error rates at signal-to-noise ratios of 2.5 dB and beyond for the codes experimented on. The average number of iterations to achieve this improved performance is practically the same as that of the traditional sum-product algorithm.
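On the BEC, sum-product decoding reduces to iterative peeling: any parity check with exactly one erased neighbour resolves that bit from the parity constraint. A toy sketch on a (7,4) Hamming parity-check matrix, standing in for a large near-regular LDPC code:

```python
def peel_bec(H, received):
    """Peeling decoder for the binary erasure channel: repeatedly find a
    parity check with exactly one erased bit (marked -1) and solve it."""
    bits = list(received)
    changed = True
    while changed:
        changed = False
        for row in H:
            idx = [j for j, h in enumerate(row) if h]
            erased = [j for j in idx if bits[j] == -1]
            if len(erased) == 1:
                j = erased[0]
                # The erased bit is the parity of the other bits in the check.
                bits[j] = sum(bits[k] for k in idx if k != j) % 2
                changed = True
    return bits

# (7,4) Hamming parity-check matrix used as a tiny illustrative code.
H = [[1, 0, 1, 0, 1, 0, 1],
     [0, 1, 1, 0, 0, 1, 1],
     [0, 0, 0, 1, 1, 1, 1]]
received = [-1, 1, -1, 0, 0, 1, 1]   # codeword 0110011 with bits 0 and 2 erased
decoded = peel_bec(H, received)
```

The decoder stalls exactly when the remaining erasures form a stopping set, which is why girth and degree structure — the focus of the ARG construction — matter so much on the BEC.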
47

Modelování rušení pro xDSL / Interference modelling for xDSL

Čermák, Josef January 2008 (has links)
This work focuses on interference modelling for xDSL technologies. First, the xDSL technologies are introduced and their different kinds are described. The next part deals with the basic parameters of metallic cable lines, especially the primary and secondary parameters. Nowadays, wider bandwidths are used to achieve higher data transmission rates, and at higher frequencies signal transmission suffers more intense line attenuation. To identify the transfer characteristics of lines carrying an xDSL system, mathematical models of transmission lines are applied; these models are treated in the next chapter and compared at the end of the section using their magnitude and phase characteristics. The main aim of the work is to describe the different impacts which influence the efficiency of xDSL systems. First, the interference sources arising inside the cable are explained in detail: Near End Crosstalk (NEXT), Far End Crosstalk (FEXT), and Additive White Gaussian Noise (AWGN). Then the external interference sources are covered: Radio Frequency Interference (RFI) and impulse noise. A further goal of this thesis is the design of a workstation for testing the spectral features and efficiency of xDSL systems. The work also presents a GUI application, which serves as an instrument for selecting or entering the parameters of the resulting interference. The last chapter describes a measurement campaign and shows the characteristics recorded on the ADSL tester and oscilloscope.
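The NEXT and FEXT mechanisms described above are often captured by simple empirical power-coupling models: NEXT coupling grows roughly as f^1.5, while FEXT grows as f^2, scales with line length, and passes through the channel attenuation. The sketch below follows the common 49-disturber convention, but the constants and scalings should be treated as illustrative assumptions rather than a specific standard's values.

```python
import numpy as np

def next_psd(psd_disturber, f, k_next=8.818e-14, n=49):
    """Simplified NEXT coupling: disturber PSD scaled by a coupling that
    grows as f**1.5, with an n**0.6 law for the number of disturbers."""
    return psd_disturber * k_next * (n / 49.0) ** 0.6 * f ** 1.5

def fext_psd(psd_disturber, f, length, channel_gain2, k_fext=7.999e-20, n=49):
    """Simplified FEXT coupling: grows as f**2, scales with coupling length,
    and is attenuated by the channel power gain |H(f)|**2."""
    return psd_disturber * channel_gain2 * k_fext * (n / 49.0) ** 0.6 * length * f ** 2

f = np.array([1e5, 1e6])            # 100 kHz and 1 MHz
pn = next_psd(1e-10, f)             # flat -100 dBm/Hz-style disturber PSD
pf = fext_psd(1e-10, f, 1000.0, 0.5)
```

The different frequency exponents are why NEXT dominates at the near end of short loops while FEXT becomes the limiting disturbance on long, heavily attenuated lines.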
48

Analýza a modelování přeslechů / Crosstalk analysis and modelling

Novotný, František January 2013 (has links)
The thesis concerns interference modelling for xDSL technologies and Ethernet. The introduction describes the origin of crosstalk, which arises during the operation of these systems, and the physical properties of the lines; the next section therefore describes the primary and secondary parameters of the homogeneous line and their modelling. To achieve higher data rates on a metallic line, systems with a larger frequency spectrum are applied, resulting in greater line attenuation. This issue and the determination of the transmission systems' characteristics are the subject of the mathematical models, which are divided according to whether they model primary or secondary parameters. The main goal of this work is to describe the effects which influence the performance of data transfer via xDSL and Ethernet technology, focusing on internal and external disturbances acting on the cable lines: near-end and far-end crosstalk, additive white noise, radio frequency interference (RFI), and impulse noise. The following part of the thesis deals with the properties of the xDSL technologies, specifically ADSL2+ and VDSL2, and of Ethernet. Another aim is to design applications which test the performance of xDSL and Ethernet transmission systems against simulated interference. The conclusion describes the design and implementation of laboratory experiments for measuring the efficiency and spectral properties of xDSL. The proposed laboratory protocols, including the measured waveforms, are annexed to this thesis.
49

Segmentace obrazu pomocí neuronové sítě / Neural Network Based Image Segmentation

Jamborová, Soňa January 2011 (has links)
This work proposes software for neural-network-based image segmentation. It defines the basic terms of the topic, focusing mainly on the preparation of image data for segmentation using a neural network, and describes and compares different approaches to image segmentation.
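The data-preparation step this abstract emphasizes — turning an image into one feature vector per pixel for a neural network classifier — can be sketched as follows; the k×k neighbourhood scheme is one common choice, assumed here for illustration.

```python
import numpy as np

def patches_for_pixels(img, k=3):
    """Turn an image into one k*k neighbourhood feature vector per pixel,
    a typical input preparation for per-pixel neural network segmentation."""
    pad = k // 2
    p = np.pad(img, pad, mode="edge")   # replicate borders so every pixel has a patch
    rows, cols = img.shape
    feats = np.empty((rows * cols, k * k))
    for i in range(rows):
        for j in range(cols):
            feats[i * cols + j] = p[i:i + k, j:j + k].ravel()
    return feats

X = patches_for_pixels(np.arange(16.0).reshape(4, 4))
```

Each row of `X` can then be fed to a classifier that predicts the segment label of the corresponding pixel.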
50

Methods for image restoration and segmentation by sparsity promoting energy minimization / Методе за рестаурацију и сегментацију дигиталне слике засноване наминимизацији функције енергије која фаворизује ретке репрезентацијесигнала / Metode za restauraciju i segmentaciju digitalne slike zasnovane naminimizaciji funkcije energije koja favorizuje retke reprezentacijesignala

Bajić Papuga Buda 16 September 2019 (has links)
The energy minimization approach is widely used in image processing: many image processing problems can be modelled as minimization problems. This thesis deals with two crucial tasks of image analysis workflows: restoration and segmentation of images corrupted by blur and noise. Both are modelled as energy minimization problems, where the energy function is composed of two parts: a data fidelity term and a regularization term. The main contribution of this thesis is the development of new data fidelity and regularization terms for both tasks.
The image restoration methods (non-blind and blind deconvolution and super-resolution reconstruction) developed within this thesis are suited to the mixed Poisson-Gaussian noise encountered in many realistic imaging conditions. We use the generalized Anscombe variance-stabilizing transformation to remove the signal dependency of the noise, and we propose a novel data fidelity term which takes the stabilization process into account. Turning to the regularization term, we investigate how sparsity-promoting regularization in the gradient domain, formulated as Total Variation, can be improved in the presence of blur and mixed Poisson-Gaussian noise; we found that the Huber potential function leads to a significant improvement in restoration performance.
We also propose a new segmentation method, so-called coverage segmentation, which estimates the relative coverage of each pixel in a sensed image by each image component. Its data fidelity term takes blurring and down-sampling into account, providing robust segmentation in the presence of blur while allowing segmentation at increased spatial resolution. In addition, new sparsity-promoting regularization terms are suggested: (i) Huberized Total Variation, which provides smooth object boundaries and noise removal, and (ii) non-edge image fuzziness, which reflects the assumption that imaged objects are crisp and that fuzziness is mainly due to the imaging and digitization process.
The applicability of the proposed restoration and coverage segmentation methods is demonstrated on Transmission Electron Microscopy image enhancement and on segmentation of micro-computed tomography and hyperspectral images.
