131

Spatial decomposition of ultrasonic echoes

Sandell, Magnus January 1994 (has links)
The pulse-echo method is one of the most important in ultrasonic imaging. In many areas, including medical applications and nondestructive evaluation, it constitutes one of the fundamental principles for acquiring information about the examined object. An ultrasonic pulse is transmitted into a medium and the reflected pulse is recorded, often by the same transducer. In 3-dimensional imaging, or surface profiling, the distance between the object and the transducer is estimated as proportional to the time-of-flight (TOF) of the pulse. If the transducer is then moved in a plane parallel to the object, a surface profile can be obtained. Usually some sort of correlation between echoes is performed to estimate their relative difference in TOF. However, this assumes that the shape of the echoes is the same. This is not the case, as the shape depends on the surface in the neighbourhood of the transducer's symmetry axis and varies as the transducer is moved across the surface. The change in signal shape reduces the accuracy of the TOF estimation. A simple example is when the surface has a step: the resulting echo is the superposition of two echoes, one from the "top" and one from the "bottom", and the TOF estimate will then be almost arbitrary. Another difficulty with pulse-echo imaging is the lateral resolution. The ultrasonic beam is not infinitesimally thin but has a non-negligible spatial extent, even for focused transducers. This means that two point reflectors separated laterally by only a small distance cannot be resolved by ultrasound. The spatial decomposition of the ultrasonic echoes suggested in this licentiate thesis can be used to extract information from the pulse deformation and to improve the lateral resolution in the following way: * In surface profiling, the surface is modelled as piecewise plane, i.e. the reflected pulse stems from a locally plane and perpendicular object. If we instead model the part of the surface that reflects the ultrasonic pulse as a sloping plane, there are two advantages: if we can estimate both the distance to, and the slope of, the surface, we can either increase the accuracy or decrease the number of scanning points while maintaining the same accuracy. * To improve the lateral resolution we have to take into account how points off the symmetry axis contribute to the total echo. If we know this, some kind of inverse spatial filter or other method can be constructed to improve the resolution. This thesis comprises the following five parts:

Part A1: (Magnus Sandell and Anders Grennberg) "Spatial decomposition of the ultrasonic echo using a tomographic approach. Part A: The regularization method". Since the pulse-echo system can be considered linear, i.e. the echo from an arbitrary object can be thought of as the sum of the echoes from the contributing points on the surface, it would be very useful to know the echo from a point reflector. By doing this spatial decomposition we can simulate the echo from any object. It is, however, not practically possible to measure the single point echo (SPE) directly. If the reflector is to be considered pointlike, its size has to be so small that the echo will disappear in the background noise; if it is increased, there will be spatial smoothing. Instead, we propose an indirect method that uses echoes from sliding halfplanes. This results in measurements with far better SNR, and by modifying methods from tomography we can obtain the SPE. An error analysis is performed for the calculated SPE, and simulated echoes from sloping halfplanes, using the obtained SPE, are compared with measured ones.

Part A2: (Anders Grennberg and Magnus Sandell) "Experimental determination of the single point echo of an ultrasonic transducer using a tomographic approach". The main ideas of Part A1 are presented in this conference paper. It was presented at the Conference of the IEEE Engineering in Medicine and Biology Society in Paris, France, in October 1992.

Part B1: (Anders Grennberg and Magnus Sandell) "Spatial decomposition of the ultrasonic echo using a tomographic approach. Part B: The singular system method". In this part we continue the approach of spatially decomposing the ultrasonic echo. The SPE is again determined from echoes from sliding halfplanes. Here we interpret the SPE and the halfplane echoes as belonging to two different weighted Hilbert spaces, chosen with regard to the properties of the SPE and the measured echoes. The SPE is supposed to belong to one of these spaces and is mapped by an integral operator to the other space. This is measured, but the measurements also contain additive noise. A continuous inverse to this operator does not exist, so the problem is ill-posed. A pseudo-inverse to the operator is constructed using a singular value decomposition (SVD). By decomposing the halfplane echoes with N basis functions from the SVD, the SPE can be found. The spatial decomposition made in this part can be useful for the long-term goals of estimating the slope of a tilted plane and increasing the lateral resolution.

Part B2: (Anders Grennberg and Magnus Sandell) "Experimental determination of the ultrasonic echo from a pointlike reflector using a tomographic approach". This is a contribution to the 1992 IEEE Ultrasonics Symposium in Tucson, USA. It is an extract of Part B1 and deals with the SVD-based inversion of the halfplane echoes.

Part C: (Anders Grennberg and Magnus Sandell) "Estimation of subsample time delay differences in narrowbanded ultrasonic echoes using the Hilbert transform correlation". This part deals with a method for increased axial resolution. Using the fact that airborne ultrasonic pulses are narrowband, a new algorithm for estimating small time delays is described. This method can be used in conjunction with a normal TOF estimator: the latter makes a robust but rough (i.e. within a few samples) estimate, and the remaining small time delay is estimated using the proposed method. Another area of application is an improved averaging algorithm. Airborne ultrasound suffers from jitter caused by air movement and temperature gradients. This jitter can be modelled as a small random time shift. A straightforward averaging would then sum pulses that are not aligned in time, which results in pulse deformation. By estimating the time shift caused by the jitter, all echoes can be time-aligned so that no pulse deformation occurs when summing them. / Godkänd; 1994; 20080401 (ysko)
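The correlation-based TOF estimation and the subsample refinement of Part C can be illustrated with a short sketch. The snippet below is not the thesis's exact estimator; it is a minimal illustration, assuming a narrowband pulse with known centre frequency f0 and sample rate fs, of combining a coarse correlation-peak lag with a phase-based (Hilbert transform) refinement. Function and variable names are illustrative.

```python
import numpy as np
from scipy.signal import hilbert

def subsample_delay(echo_a, echo_b, fs, f0):
    """Estimate the delay of echo_b relative to echo_a (seconds).

    Coarse step: integer-sample lag at the envelope peak of the
    cross-correlation of the analytic signals.
    Fine step: residual (subsample) delay read from the carrier phase of
    the complex cross-correlation at that lag, valid for narrowband pulses.
    """
    a = hilbert(echo_a)                      # analytic signals
    b = hilbert(echo_b)
    xc = np.correlate(b, a, mode="full")     # numpy conjugates the 2nd argument
    lags = np.arange(-len(a) + 1, len(b))
    k = np.argmax(np.abs(xc))                # coarse (integer-sample) lag
    tau_fine = -np.angle(xc[k]) / (2.0 * np.pi * f0)
    return lags[k] / fs + tau_fine
```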
132

On Massive MIMO for Massive Machine-Type Communications

Becirovic, Ema January 2020 (has links)
To cover all the needs and requirements of mobile networks in the future, the predicted usage of mobile networks has been split into three use-cases: enhanced mobile broadband, ultra-reliable low-latency communication, and massive machine-type communication. In this thesis we focus on the massive machine-type communication use-case, which is intended to facilitate the ever-increasing number of smart devices and sensors. In the massive machine-type communication use-case, the main challenges are to accommodate a huge number of devices while keeping the battery lives of the devices long and allowing them to be placed in far-away locations. However, these devices are not concerned with features such as low latency, high data rates, or mobility. In this thesis we study the application of massive MIMO (multiple-input multiple-output) technology to the massive machine-type communication use-case. Massive MIMO has been on the radar as an enabler for future communication networks over the last decade and is now firmly rooted in both academia and industry. The main idea of massive MIMO is to utilize a base station with a massive number of antennas, which gives the ability to spatially direct signals and serve multiple devices in the same time- and frequency resource. More specifically, in this thesis we study A) a scenario where the base station takes advantage of a device's low mobility to improve its channel estimate, B) a random access scheme for massive machine-type communication which can accommodate a huge number of devices, and C) a case study where the benefits of massive MIMO for long-range devices are quantified. The results show that the base station can significantly improve the channel estimates for a low-mobility user, so that it can tolerate a lower SNR while still achieving the same rate. Additionally, the properties of massive MIMO greatly help to detect users in random access scenarios and increase link budgets compared to single-antenna base stations.
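As a toy illustration of item A), the sketch below shows how a base station can exploit a device's low mobility: if the channel stays essentially constant over several coherence blocks, pilot observations can be averaged, which reduces the estimation error roughly in proportion to the number of blocks. The i.i.d. channel and noise model, the parameter values and the simple averaging estimator are illustrative assumptions, not the scheme analysed in the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)
M, K, snr = 100, 10, 0.1          # antennas, coherence blocks, per-pilot SNR (-10 dB)

# True channel to one static device (unit-variance complex Gaussian per antenna)
h = (rng.standard_normal(M) + 1j * rng.standard_normal(M)) / np.sqrt(2)

# One noisy pilot observation per coherence block: y_k = h + n_k
noise = (rng.standard_normal((K, M)) + 1j * rng.standard_normal((K, M))) / np.sqrt(2 * snr)
Y = h + noise

h_single = Y[0]            # estimate from a single block
h_avg = Y.mean(axis=0)     # low-mobility device: average over K blocks

mse = lambda est: np.mean(np.abs(est - h) ** 2)
print(f"MSE, one block: {mse(h_single):.2f}   MSE, {K}-block average: {mse(h_avg):.2f}")
```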
133

Timing-Based Localization using Multipath Information

Bergström, Andreas January 2020 (has links)
Measurements of radio signals are commonly used for localization purposes, where the goal is to determine the spatial position of one or multiple objects. In realistic scenarios, any transmitted radio signal will be affected by the environment through reflections, diffraction at edges and corners, etc. This causes a phenomenon known as multipath propagation, by which multiple instances of the transmitted signal, having traversed different paths, are heard by the receiver. These are known as Multi-Path Components (MPCs). The direct path (DP) between transmitter and receiver may also be occluded, causing what is referred to as non-Line-of-Sight (non-LOS) conditions. As a consequence of these effects, the estimated position of the object(s) may often be erroneous. This thesis focuses on how to achieve better localization accuracy by accounting for the above-mentioned multipath propagation and non-LOS effects. It is proposed how to mitigate these in the context of positioning based on estimation of the DP between transmitter and receiver, and how to constructively utilize the additional information about the environment which they implicitly provide. This is all done in a framework wherein a given signal model and a map of the surroundings are used to build a mathematical model of the radio environment, from which the resulting MPCs are estimated. First, methods to mitigate the adverse effects of multipath propagation and non-LOS conditions for positioning based on estimation of the DP between transmitter and receiver are presented. This is initially done using robust statistical measurement error models based on aggregated error statistics, where significant improvements are obtained without the need to provide detailed received-signal information. The gains are even larger with up-to-date real-time information based on the estimated MPCs. Second, the association of the estimated MPCs with the signal paths predicted by the environmental model is addressed. This leads to a combinatorial problem which is approached with tools from multi-target tracking theory. A rich radio environment in terms of many MPCs gives better localization accuracy but causes the problem size to grow large, something which can be remedied by excluding less probable paths. Simulations indicate that in such environments, the single best association hypothesis may be a reasonable approximation which avoids the calculation of a vast number of possible hypotheses. Accounting for erroneous measurements is crucial, but may have drawbacks if no such errors actually occur. Finally, theoretical localization performance bounds when utilizing all or a subset of the available MPCs are derived. A rich radio environment allows for good positioning accuracy using only a few transmitters/receivers, assuming that these are used in the localization process. In contrast, in a less rich environment where basically only the DP/LOS components are measurable, more transmitters/receivers and/or the combination of downlink and uplink measurements are required to achieve the same accuracy. The receiver's capability of distinguishing between multiple MPCs arriving at approximately the same time also affects the localization accuracy.
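A minimal sketch of the map-aided idea: if a reflected MPC can be associated with a known wall, it can be treated as a direct path from a mirrored (virtual) transmitter, and the position follows from nonlinear least squares on the path delays. The geometry, the assumption of synchronized TOA measurements and the two-path setup below are illustrative, not the estimators or association methods developed in the thesis.

```python
import numpy as np
from scipy.optimize import least_squares

c = 3e8                                      # propagation speed [m/s]
tx = np.array([0.0, 0.0])                    # transmitter position (known from the map)
tx_mirror = np.array([0.0, 20.0])            # image of tx in a wall at y = 10 m (one known MPC)
anchors = np.vstack([tx, tx_mirror])         # each path treated as a (virtual) source

p_true = np.array([7.0, 3.0])
toa = np.linalg.norm(anchors - p_true, axis=1) / c       # ideal delays of DP and reflection
toa += np.random.default_rng(1).normal(0, 0.3e-9, 2)     # timing noise

def residuals(p):
    # difference between modelled and measured path delays
    return np.linalg.norm(anchors - p, axis=1) / c - toa

p_hat = least_squares(residuals, x0=np.array([1.0, 1.0])).x
print("estimated position:", p_hat)
```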
134

Combined RCD, power manager and phase-switcher for electric vehicles charging, controlled by an FPGA / Kombinerad jordfelsbrytare, energiövervakare samt fasväxlare för elbilsladdning, styrd via en FPGA

Hedberg, Daniel, Wetterin, Erik January 2016 (has links)
This master's thesis investigates the subsystems required to create a combined residual-current device, power manager and phase-switcher for electric vehicle charging, controlled by an FPGA. The purpose of this task is to create a prototype design for Chargestorm, a company that manufactures charging stations for electric vehicles and provides a portal for payment. Each subsystem is investigated separately to survey the available alternatives and evaluate which solutions fit this design best. The system is designed to handle currents of 32 A on three phases. The design consists of a Hall sensor to detect the residual current, switches to meet the switching requirements and to break the circuit when needed, current transformers to measure current, and differential amplifiers to measure voltage. All logic and communication is controlled by an FPGA. Specific isolation requirements are set to prevent the power grid from arcing to the low-voltage components. Optocouplers are used to allow communication between the components on the high-voltage and low-voltage sides. The final design is placed on a six-layer printed circuit board, mainly to allow for more copper to conduct the high currents and for thermal management. Theoretically, the work is complete and all requirements are fulfilled. In practice, however, the prototype has not been fully tested and evaluated to see if the theory matches the real world.
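The residual-current criterion itself can be sketched in a few lines: the instantaneous sum of all conductor currents should cancel, and a nonzero RMS sum indicates leakage to ground. The snippet below is a host-language simulation with made-up waveforms and a 30 mA threshold; in the actual design this sum is sensed by the Hall sensor and the decision logic runs in the FPGA.

```python
import numpy as np

def residual_current_rms(i_l1, i_l2, i_l3, i_n):
    """RMS of the instantaneous sum of all conductor currents.

    In a healthy circuit the phase and neutral currents cancel; a nonzero
    sum indicates current leaking to ground (a residual current).
    """
    return np.sqrt(np.mean((i_l1 + i_l2 + i_l3 + i_n) ** 2))

fs, f = 10_000, 50                                    # sample rate [Hz], mains frequency [Hz]
t = np.arange(0, 0.1, 1 / fs)
i1 = 32 * np.sqrt(2) * np.sin(2 * np.pi * f * t)      # 32 A RMS on each phase
i2 = 32 * np.sqrt(2) * np.sin(2 * np.pi * f * t - 2 * np.pi / 3)
i3 = 32 * np.sqrt(2) * np.sin(2 * np.pi * f * t + 2 * np.pi / 3)
i_n = -(i1 + i2 + i3)                                 # neutral returns the (balanced) sum
leak = 0.04 * np.sqrt(2) * np.sin(2 * np.pi * f * t)  # 40 mA earth-fault current on L1

trip = residual_current_rms(i1 + leak, i2, i3, i_n) > 0.030   # 30 mA threshold
print("trip:", trip)
```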
135

Power Efficiency of Radar Signal Processing on Embedded Graphics Processing Units (GPUs)

Blomberg, Simon January 2018 (has links)
In recent years the use of graphics processing units for general-purpose computation has been increasing. This provides a relatively cheap and easy way of optimizing computation-intensive tasks. Although a lot of research has been done on this subject, the power aspect is not very clear. This thesis treats the implementation and benchmarking of three radar signal processing algorithms for the CPU and GPU of the Jetson Tegra X2 module. The objective was to measure the power consumption and speed of the GPU versus CPU implementations. All three algorithms were executed most efficiently on the GPU, in terms of both power consumption and speed. The Space Time Adaptive Processing algorithm presented the biggest speedup and the Corner Turn the smallest. It was found that both the computational and power efficiency of the GPU implementations were lower for sufficiently small input matrices.
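The power/speed comparison boils down to measuring time per run and turning sampled power into energy per run. The harness below is a hedged sketch: the power read-out is left as an abstract callback, since the exact sensor interface (for example the Jetson module's on-board power monitors) is platform-specific and not detailed here.

```python
import time
import numpy as np

def benchmark(kernel, read_power_w, n_runs=20):
    """Time a kernel and convert sampled power into energy per run.

    `kernel` is a no-argument callable running one iteration of the
    algorithm; `read_power_w` is a platform-specific callback returning
    the current module power draw in watts (left abstract here).
    """
    samples, t0 = [], time.perf_counter()
    for _ in range(n_runs):
        kernel()
        samples.append(read_power_w())
    elapsed = time.perf_counter() - t0
    mean_power = float(np.mean(samples))
    return {
        "time_per_run_s": elapsed / n_runs,
        "mean_power_w": mean_power,
        "energy_per_run_j": mean_power * elapsed / n_runs,
    }
```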
136

Furniture swap : Segmentation and 3D rotation of natural images using deep learning

Bodin, Emanuel January 2021 (has links)
Learning to perceive scenes and objects from 2D images as 3D models is a trivial task for a human but very challenging for a computer. Being able to retrieve a 3D model from a scene just by taking a picture of it can be of great use in many fields, for example when making 3D blueprints for buildings or working with animations in the game or film industry. Novel view synthesis is a field within deep learning where generative models are trained to construct 3D models of scenes or objects from 2D images. In this work, the generative model HoloGAN is combined with a U-net segmentation network. The solution is able to, given an image containing a single object as input, swap that object for another one and then perform a rotation of the scene, generating new images from unobserved viewpoints. The segmentation network is trained with paired segmentation masks, while HoloGAN is able to learn 3D metrics of scenes from unlabeled 2D images in an unsupervised manner. The system as a whole is trained on one dataset containing images of cars, while the performance of HoloGAN was evaluated on four additional datasets. The chosen method proved to be successful but came with some drawbacks, such as requiring large datasets and being computationally expensive to train.
137

Impedansanpassning vior : Analytisk studie av viors impedans, 0-10 GHz

Stozinic, Marko January 2020 (has links)
No description available.
138

Practical Consideration on Ultrawideband Synthetic Aperture Radar Data Processing

Vu, Viet Thuy January 2009 (has links)
The practical considerations in ultrawideband (UWB) synthetic aperture radar (SAR) data processing in general, and UWB SAR imaging in particular, are clarified and presented in detail in this thesis. They are the imaging algorithms, the impulse response function in SAR imaging (IRF-SAR), apodization, RF interference (RFI) and SAR image quality measurement. Different algorithms in both the time and frequency domains, and their suitability for processing UWB SAR data, are investigated and evaluated. The necessary modifications to these algorithms are proposed to fulfill the requirements of UWB SAR data processing. The time-domain imaging algorithms are highly recommended for UWB SAR data processing due to characteristics such as integrated motion error compensation, unlimited scene size and local processing. A new IRF-SAR, which is a function of fractional bandwidth and antenna beamwidth, is derived. The function allows us to investigate different UWB SAR systems; such investigations are not facilitated by the currently used IRF-SAR, the sinc function. The derived IRF-SAR remains valid for investigating narrowband (NB) SAR systems. A discussion of apodization techniques and the possibilities of applying them to UWB SAR data processing is given in this thesis. Handling orthogonal and non-orthogonal sidelobes in UWB SAR imaging is shown to be challenging with the currently used apodization approaches: the linear apodization approaches always result in a loss of resolution, while the phase information can be destroyed by the nonlinear apodization approaches. A new approach is suggested to suppress RFI in the UWB SAR signal, which is easily disturbed by RFI sources. The advantages of the approach compared to the others are its adaptive and real-time processing characteristics. A new definition of SAR image quality measurement is also presented in this thesis. The complicated behavior of the IRF-SAR over fractional bandwidth and antenna beamwidth makes the currently used definition unsuitable for UWB SAR image quality measurement. The unsuitability is mainly caused by the inappropriate delimitation of mainlobe and sidelobe areas, the fixed broadening factors and the fixed spreading factor of the orthogonal and non-orthogonal sidelobes. Based on these practical considerations, the thesis also presents some possibilities for proposing a definition of UWB SAR, which is still not available. Initial results show that these possibilities comply with the UWB definition proposed by the Federal Communications Commission (FCC) in 2002.
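As an illustration of the recommended time-domain approach, the sketch below shows the core of global backprojection for real-valued (impulse-like) UWB data: every pixel accumulates the range-compressed echo sample at its two-way delay over all aperture positions. Interpolation, motion compensation and any phase handling for complex data are omitted; the array names and the nearest-neighbour lookup are illustrative simplifications, not the algorithms evaluated in the thesis.

```python
import numpy as np

def global_backprojection(data, t, platform_pos, grid_x, grid_y, c=3e8):
    """Time-domain global backprojection of pulse-compressed SAR data.

    data[m, :] is the range-compressed echo recorded at aperture position
    platform_pos[m] = (x, y, z); t is the fast-time axis. For every image
    pixel (at z = 0) the echo sample at the two-way delay is accumulated
    over all aperture positions.
    """
    image = np.zeros((len(grid_y), len(grid_x)))
    for m, pos in enumerate(platform_pos):
        dx = grid_x[None, :] - pos[0]
        dy = grid_y[:, None] - pos[1]
        r = np.sqrt(dx**2 + dy**2 + pos[2]**2)        # range to every pixel
        delay = 2.0 * r / c
        # nearest-neighbour lookup in fast time
        idx = np.clip(np.round((delay - t[0]) / (t[1] - t[0])).astype(int), 0, len(t) - 1)
        image += data[m, idx]
    return image
```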
139

Localization using Magnetometers and Light Sensors

Wahlström, Niklas January 2013 (has links)
Localization is essential in a variety of applications such as navigation systems, aerospace and surface surveillance, robotics and animal migration studies, to mention a few. There are many standard techniques available, where the most common are based on information from satellite or terrestrial radio beacons, radar networks or vision systems. In this thesis, two alternative techniques are investigated. The first localization technique is based on one or more magnetometers measuring the induced magnetic field from a magnetic object. These measurements depend on the position and the magnetic signature of the object and can be described with models derived from electromagnetic theory. For this technology, two applications have been analyzed. The first application is traffic surveillance, which has a high need for robust localization systems. By deploying one or more magnetometers in the vicinity of the traffic lane, vehicles can be detected and classified. These systems can be used for safety purposes, such as detecting wrong-way drivers on highways, as well as for statistical purposes by monitoring the traffic flow. The second application is indoor localization, where a mobile magnetometer measures the stationary magnetic field induced by magnetic structures in indoor environments. In this work, models for such magnetic environments are proposed and evaluated. The second localization technique uses light sensors measuring light intensity during day and night. After registering the times of sunrise and sunset from this data, basic formulas from astronomy can be used to locate the sensor. The main application is localization of small migrating animals. In this work, a framework for localizing migrating birds using light sensors is proposed. The framework has been evaluated on data from a common swift, which during a period of ten months was equipped with a light sensor. / The ability to determine where an object is located is important in many different applications, for example air and maritime surveillance, robotics and studies of animal migration routes, to name a few. It is especially desirable to be able to perform this positioning without human involvement, either to position objects that a human would not be able to, or to make the work more efficient. To determine a position automatically, sensors are needed that measure various quantities in their surroundings and convert them into an electrical signal. With a computer program, this electrical signal can in turn be converted into a position. Many standard technologies are available that use different types of sensors measuring different quantities. The most common are based on satellite navigation (GPS), radio waves, radar and cameras. In this thesis, two alternative technologies have been investigated which in certain applications have advantages over the standard technologies. The first technology for positioning an object is based on one or more sensors that sense the magnetic field from objects containing a lot of metal, for example vehicles. From this magnetic field, both the position and the size of the object can be determined. Based on this technology, two applications have been analysed. The first application is traffic surveillance, where there is a great need for technology that can determine the position of cars. By placing one or more sensors along the roadside, passing cars can be detected. These systems can be used for safety purposes, such as warning of cars driving in the wrong direction on motorways, or for statistical purposes by monitoring the traffic flow. The second application concerns determining the position of an object in an indoor environment. Many buildings contain numerous objects made of metal, and these objects are surrounded by a magnetic field. By walking around indoors with a sensor, magnetic fields of varying strength will be sensed depending on where in the building one is located. In this thesis, mathematical models for describing such magnetic objects are investigated. The second technology uses light sensors to study the areas to which migratory birds fly. The bird is equipped with a light sensor that measures light intensity throughout the day and night. The bird is then released and, hopefully, recovered a year later so that all the information from the sensor can be analysed. From these measurements, the times of sunrise and sunset can be computed afterwards, and the bird's migration route can then be determined using formulas from astronomy. In this work, a method for analysing this information is proposed. The method has been evaluated on data from a common swift that, over a period of ten months, migrated to Africa and back to Sweden again.
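The astronomy step of the light-sensor technique can be sketched compactly: longitude follows from the time of local solar noon, and latitude from the day length via the sunrise equation. The snippet below is a rough illustration that ignores the equation of time, atmospheric refraction and the threshold-calibration issues handled in the thesis framework (and latitude becomes unobservable near the equinoxes).

```python
import numpy as np

def geolocate_from_twilight(sunrise_utc_h, sunset_utc_h, day_of_year):
    """Rough position from one day's sunrise/sunset times (hours, UTC).

    Longitude follows from the time of local solar noon; latitude from the
    day length via the sunrise equation cos(w0) = -tan(lat) * tan(decl).
    """
    solar_noon = 0.5 * (sunrise_utc_h + sunset_utc_h)      # hours UTC
    lon = 15.0 * (12.0 - solar_noon)                       # 15 degrees per hour, east positive

    # Approximate solar declination for the given day of year (Cooper's formula)
    decl = np.radians(23.44) * np.sin(2 * np.pi * (284 + day_of_year) / 365.0)
    w0 = np.pi * (sunset_utc_h - sunrise_utc_h) / 24.0     # half the day length, in radians
    lat = np.degrees(np.arctan(-np.cos(w0) / np.tan(decl)))
    return lat, lon

print(geolocate_from_twilight(sunrise_utc_h=3.5, sunset_utc_h=19.5, day_of_year=172))
```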
140

A Variational Approach to Image Diffusion in Non-Linear Domains

Åström, Freddie January 2013 (has links)
Image filtering methods are designed to enhance noisy images captured in situations that are problematic for the camera sensor. Such noisy images originate from unfavourable illumination conditions, camera motion, or the desire to use only a low dose of ionising radiation in medical imaging. Therefore, in this thesis work I have investigated the theory of partial differential equations (PDE) to design filtering methods that attempt to remove noise from images. This is achieved by modeling and deriving energy functionals which in turn are minimized to attain a state of minimum energy. This state is obtained by solving the so-called Euler-Lagrange equation. An important theoretical contribution of this work is that conditions are put forward determining when a PDE has a corresponding energy functional. This is described in particular for the structure tensor, a commonly used tensor in computer vision. A primary component of this thesis work is to model adaptive image filtering such that any modification of the image is structure-preserving yet noise-suppressing. In color image filtering this is a particular challenge, since artifacts may be introduced at color discontinuities. For this purpose a non-Euclidean color opponent transformation has been analysed and used to separate the standard RGB color space into uncorrelated components. A common approach to achieve adaptive image filtering is to select an edge-stopping function from a set of functions that have proven to work well in the past. The purpose of the edge-stopping function is to inhibit smoothing of image features that are desired to be retained, such as lines, edges or other application-dependent characteristics. Thus, a step is taken from ad-hoc filtering based on experience towards application-driven filtering, such that only desired image features are processed. This improves what is characterised as visually relevant features, a topic which this thesis covers, in particular for medical imaging. The notion of what constitutes relevant features is subjective, and a layman's opinion may differ from a professional's. Therefore, we advocate that any image filtering method should yield not only an improvement in numerical measures but also a visual improvement experienced by the respective end-user. / NACIP, VIDI, GARNICS
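The variational idea can be made concrete with a small sketch: minimizing an edge-stopping energy by gradient descent yields a nonlinear diffusion of the image. The snippet below uses the classical Perona-Malik conductivity as the edge-stopping function, explicit time stepping and crude boundary handling; it is a generic illustration of the gradient-flow principle, not the tensor-based filters derived in the thesis.

```python
import numpy as np

def nonlinear_diffusion(u, n_iter=50, dt=0.2, k=10.0):
    """Explicit gradient-descent steps on an edge-stopping diffusion energy.

    Each iteration takes one step of u_t = div(g(|grad u|) grad u), the
    Euler-Lagrange gradient flow of the energy, with the Perona-Malik
    conductivity g(s) = 1 / (1 + (s/k)^2) inhibiting smoothing across edges.
    """
    u = u.astype(float).copy()
    for _ in range(n_iter):
        # forward differences for the image gradient
        ux = np.diff(u, axis=1, append=u[:, -1:])
        uy = np.diff(u, axis=0, append=u[-1:, :])
        g = 1.0 / (1.0 + (ux**2 + uy**2) / k**2)       # edge-stopping function
        # backward differences give the divergence of the flux
        fx, fy = g * ux, g * uy
        div = (fx - np.roll(fx, 1, axis=1)) + (fy - np.roll(fy, 1, axis=0))
        u += dt * div
    return u
```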
