About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
81

Simulation and Analysis of Ultrasonic Wave Propagation in Pre-stressed Screws

Andrén, Erik January 2019 (has links)
The use of ultrasound to measure preload in screws and bolts has been studied extensively over the last decades. The technique is based on establishing a relationship between the preload and the change in time of flight (TOF) of an ultrasonic pulse propagating back and forth through a screw. This technique has major advantages over other methods such as torque and angle tightening, mainly because of its independence of friction. This is of great interest for Atlas Copco since it increases the accuracy and precision of their assembly tools. The purpose of this thesis was to investigate ultrasonic wave propagation in pre-stressed screws using the simulation software ANSYS, and to analyse the results using signal processing. The simulations were conducted in order to gain an understanding of the wavefront distortion effects that arise. Further, an impulse response of the system was estimated with the purpose of separating the multiple echoes that occur from secondary propagation paths from one another. The results strengthen the hypothesis that the received echoes are superpositions of reflections taking different propagation paths through the screw. An analytical estimation of the wavefront curvature also shows that the wavefront distortion due to higher stress near the screw boundaries can be neglected. Additionally, a compressed sensing technique has been used to estimate the impulse response of the screw. The estimated impulse response models the echoes as superpositions of secondary echoes, with significant taps corresponding to the TOF of the shortest path and a mode-converted echo. The method is also shown to be stable in noisy environments. The simulation model gives rise to a slower speed of sound than expected, most likely because finite element analysis in general overestimates the stiffness of the model.
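The TOF-change measurement that the technique rests on can be illustrated with a short cross-correlation sketch (a hypothetical Python illustration, not the thesis code; the pulse shape, sampling rate and the `tof_shift` helper are invented for the example):

```python
import numpy as np

def tof_shift(reference: np.ndarray, echo: np.ndarray, fs: float) -> float:
    """Estimate the change in time of flight between a reference echo and
    a later echo from the peak of their cross-correlation."""
    corr = np.correlate(echo, reference, mode="full")
    lag = int(np.argmax(corr)) - (len(reference) - 1)  # delay in samples
    return lag / fs                                    # delay in seconds

# Synthetic demo: a windowed tone burst delayed by 5 samples at 1 MHz.
fs = 1e6
t = np.arange(256)
pulse = np.exp(-((t - 50.0) / 8.0) ** 2) * np.sin(2 * np.pi * 0.17 * t)
delayed = np.roll(pulse, 5)
delta_tof = tof_shift(pulse, delayed, fs)  # 5 samples -> 5 microseconds
```

In practice the correlation peak would be interpolated for sub-sample resolution, since preload-induced TOF changes are typically far below one sample period.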
82

A polynomial phase model for estimation of underwater acoustic channels using superimposed pilots

Trulsson, Felix January 2019 (has links)
In underwater acoustic communications, the time variation of the channel is a major challenge. Accurate estimation of the impulse response at the receiver is crucial for decoding the signal. One approach is to transmit a superimposed pilot sequence along with the unknown message; knowledge of this sequence makes it possible to continuously track the variation of the channel over time. This thesis investigates whether superimposed pilot sequences make it possible to separate the taps of the channel impulse response and to describe the taps, with a parametric method, as polynomial phase signals. The taps were separated using a moving least squares estimator. Each tap was then fitted to a polynomial phase signal (PPS) using a weighted non-linear least squares estimator, and the non-linear parameters of the model were determined with the Levenberg-Marquardt method. The performance of the method was evaluated both on simulated data and on data from field tests. Performance was measured by the mean squared error (MSE) of the model over different frame lengths, signal-to-noise ratios (SNR), weights for the superimposed pilots, rates of time variation and impulse response lengths. The method was not sensitive to the properties of the channel. Although the model performed well, the complexity of the computations resulted in long computation times; hence, the method needs further work before a real-time implementation is possible.
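As an illustration of the PPS-fitting step, here is a minimal Python sketch (not the thesis implementation) that fits one complex channel tap to A*exp(j*poly(t)) with SciPy's Levenberg-Marquardt solver; the tap values and polynomial coefficients are invented for the example:

```python
import numpy as np
from scipy.optimize import least_squares

def fit_pps(tap: np.ndarray, t: np.ndarray, order: int = 2) -> np.ndarray:
    """Fit a channel tap to a polynomial phase signal A*exp(j*poly(t))
    via Levenberg-Marquardt on the stacked real/imaginary residuals."""
    def residuals(p):
        model = p[0] * np.exp(1j * np.polyval(p[1:], t))
        r = tap - model
        return np.concatenate([r.real, r.imag])

    p0 = np.r_[np.abs(tap).mean(), np.zeros(order + 1)]  # flat-phase start
    return least_squares(residuals, p0, method="lm").x

# Noise-free synthetic tap with a quadratic phase law.
t = np.linspace(0.0, 1.0, 200)
tap = 0.8 * np.exp(1j * (0.5 * t**2 + 0.4 * t + 0.2))
p = fit_pps(tap, t)   # p[0] ~ amplitude, p[1:] ~ phase polynomial
```

A weighted version, as in the thesis, would simply scale the residuals before the solve.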
83

Measurement system for low frequency and low amplitude AC voltage of given frequency

Kaltenböck, Viktor January 2019 (has links)
This work concerns digital signal processing methods for extracting information from low-frequency, low-amplitude signals of known frequency. Different adaptive filter concepts, such as the Wiener filter, the NLMS filter and the lock-in technique, are implemented and compared to each other. The comparison is carried out for different input signal amplitudes and noise variances, with the objective of finding the best algorithm for noise cancelling. It is done using a signal of interest combined with white noise as input to the filter element. The aim is to find the most appropriate filter for further signal analysis. The key criteria for the evaluation are the efficiency of the noise cancelling and the ease of implementation in a data processing unit.
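A minimal NLMS sketch for the known-frequency case (illustrative Python, not the thesis code; with quadrature reference inputs at the known frequency the scheme behaves much like a lock-in amplifier, and all amplitudes and step sizes below are invented):

```python
import numpy as np

def nlms_known_freq(x, f0, fs, mu=0.1, eps=1e-8):
    """NLMS adaptive linear combiner with quadrature references at the
    known frequency f0. The converged weights give the in-phase and
    quadrature amplitudes of the buried tone."""
    n = np.arange(len(x))
    ref = np.stack([np.cos(2 * np.pi * f0 / fs * n),
                    np.sin(2 * np.pi * f0 / fs * n)])
    w = np.zeros(2)
    for k in range(len(x)):
        u = ref[:, k]
        e = x[k] - w @ u                    # a-priori error
        w += mu * e * u / (eps + u @ u)     # normalized LMS update
    return w

rng = np.random.default_rng(0)
fs, f0 = 1000.0, 50.0
k = np.arange(20000)
tone = 0.05 * np.sin(2 * np.pi * f0 / fs * k)     # low-amplitude tone
noisy = tone + 0.1 * rng.standard_normal(k.size)  # buried in white noise
w = nlms_known_freq(noisy, f0, fs, mu=0.01)
amplitude = np.hypot(w[0], w[1])                  # estimated tone amplitude
```

The step size mu trades tracking speed against steady-state weight noise, which is one axis of the comparison described above.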
84

A Study of Myoelectric Signal Processing

Liu, Lukai 17 January 2016 (has links)
This dissertation on various aspects of electromyogram (EMG: muscle electrical activity) signal processing comprises two projects in which I was the lead investigator and two team projects in which I participated. The first investigator-led project was a study of reconstructing continuous EMG discharge rates from neural impulses. Related methods for calculating neural firing rates in other contexts were adapted and applied to the intramuscular motor unit action potential train firing rate. Statistical results based on simulation and clinical data suggest that the performance of spline-based methods is superior to that of conventional filter-based methods in the absence of decomposition error, but it degrades unacceptably in the presence of even the smallest decomposition errors present in real EMG data, which are typically around 3-5%. Optimal parameters for each method are found, and with normal decomposition error rates, rankings of the methods with their optimal parameters are given. Overall, the Hanning filter and Berger methods exhibit consistent and significant advantages over the other methods. In the second investigator-led project, the technique of signal whitening was applied prior to motion classification of upper-limb surface EMG signals previously collected from the forearm muscles of intact and amputee subjects. The classified motions consisted of 11 hand and wrist actions pertaining to prosthesis control. Theoretical models and experimental data showed that whitening increased EMG signal bandwidth by 65-75% and reduced the coefficients of variation of temporal features computed from the EMG. As a result, a consistent classification accuracy improvement of 3-5% was observed for all subjects at short analysis durations (< 100 ms).
In the first team-based project, advanced modeling methods of the constant posture EMG-torque relationship about the elbow were studied: whitened and multi-channel EMG signals, training set duration, regularized model parameter estimation and nonlinear models. Combined, these methods reduced error to less than a quarter of standard techniques. In the second team-based project, a study related biceps-triceps surface EMG to elbow torque at seven joint angles during constant-posture contractions. Models accounting for co-contraction estimated that individual flexion muscle torques were much higher than models that did not account for co-contraction.
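The Hanning-filter firing-rate estimate mentioned in the first project can be sketched as follows (hypothetical Python, not the dissertation's code; the window length, sampling rate and spike times are arbitrary choices for the example):

```python
import numpy as np

def hanning_rate(spike_times, fs=1000.0, duration=2.0, win_ms=400):
    """Estimate a continuous firing rate by convolving a binary impulse
    train with a unit-area Hann window ('Hanning filtering')."""
    train = np.zeros(int(duration * fs))
    idx = np.round(np.asarray(spike_times) * fs).astype(int)
    train[idx] = 1.0
    win = np.hanning(int(win_ms / 1000.0 * fs))
    win /= win.sum() / fs            # unit area -> output in discharges/s
    return np.convolve(train, win, mode="same")

# A regular 10 discharges/s train should give a rate near 10 mid-record.
spikes = np.arange(0.05, 2.0, 0.1)
rate = hanning_rate(spikes)
```

A spline-based alternative would instead interpolate the instantaneous rate between discharges; the decomposition-error sensitivity discussed above comes from spurious or missing entries in `spikes`.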
85

Geometric Autoconfiguration for Precision Personnel Location

Woodacre, Benjamin W 05 May 2010 (has links)
The goal of a radio-based precision personnel location system is to determine the position of a mobile user, to within a desired accuracy, based on signals propagated between that user and fixed stations. In emergency response situations such information would assist search and rescue operations and provide improved situational awareness. Fundamentally, location estimation is based upon the signal measured at, and the position of, each receiver. When the receivers are installed on vehicles such as fire trucks, no external infrastructure or prior characterization of the area of operations can be assumed, and the (relative) positions of the receiving stations must be re-estimated each time the system is deployed at a new site, since the geometry of the receiving antennas changes with every deployment. This dissertation presents work towards an accurate and automatic method for determining the geometric configuration of such receiving stations from sampled frequency data, using both a "classical" ranging method and a novel technique based on a singular value decomposition method for multilateration. We also compare the performance of our approaches to the Cramér-Rao bound on total antenna location error for distance- and frequency-data-based estimators, and provide experimental performance results for these methods tested in real multipath environments.
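For intuition, here is a standard linearized multilateration sketch in Python (not the dissertation's algorithm; subtracting one range equation from the others linearizes the problem, and `np.linalg.lstsq` then solves it via the SVD — the anchor and target coordinates are invented):

```python
import numpy as np

def multilaterate(anchors: np.ndarray, ranges: np.ndarray) -> np.ndarray:
    """Locate a point from ranges to known anchors: subtracting the first
    range equation from the rest gives a linear system, solved here with
    an SVD-backed least-squares solve."""
    a0, r0 = anchors[0], ranges[0]
    A = 2.0 * (anchors[1:] - a0)
    b = (r0**2 - ranges[1:]**2
         + np.sum(anchors[1:]**2, axis=1) - np.sum(a0**2))
    return np.linalg.lstsq(A, b, rcond=None)[0]

anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
target = np.array([3.0, 7.0])
ranges = np.linalg.norm(anchors - target, axis=1)
estimate = multilaterate(anchors, ranges)  # -> approximately [3, 7]
```

With noisy ranges the same overdetermined solve returns the least-squares position, which is where the geometry of the anchors drives the achievable accuracy.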
86

Study of a recursive method for matrix inversion via signal processing experiments

Ganjidoost, Mohammad January 2010 (has links)
Typescript, etc. Digitized by Kansas Correctional Industries.
87

Beyond ICA: advances on temporal BYY learning, state space modeling and blind signal processing. / CUHK electronic theses & dissertations collection

January 2000 (has links)
by Yiu-ming Cheung. "July 2000." Thesis (Ph.D.)--Chinese University of Hong Kong, 2000. Includes bibliographical references (p. 98-106). Electronic reproduction. Hong Kong: Chinese University of Hong Kong, [2012]. System requirements: Adobe Acrobat Reader. Available via World Wide Web. Abstracts in English and Chinese.
88

Reconstruction of multiple point sources by employing a modified Gerchberg-Saxton iterative algorithm

Habool Al-Shamery, Maitham January 2018 (has links)
Digital holography has been developed and used in many applications. It is a technique by which a wavefront can be recorded and then reconstructed, often even in the absence of the original object. In this project, we use digital holography methods in which the original object amplitude and phase are recorded numerically, which allows these data to be downloaded to a spatial light modulator (SLM). This provides digital holography with capabilities that are not available using optical holographic methods. The digitally reconstructed image can be refocused to different depths depending on the reconstruction distance. This remarkable aspect of digital holography can be useful in many applications, one of the most beneficial being biological cell studies. In this research, point-source digital in-line and off-axis digital holography with numerical reconstruction has been studied. The point-source hologram can be used in many biological applications. As the original object we use the binary amplitude Fresnel zone plate, which is made of rings with alternating opaque and transparent transmittance. The in-line hologram of a spherical wave of wavelength λ emanating from the point source is employed first. We subsequently employ an off-axis point source, in which the original point-source object is translated away from its on-axis location. Firstly, we create the binary amplitude Fresnel zone plate (FZP), which is considered the hologram of the point source. We determine a phase-only digital hologram calculation technique for the single point-source object, using a modified Gerchberg-Saxton algorithm (MGSA) instead of the non-iterative algorithm employed in classical analogue holography. The first complex amplitude distribution, i(x, y), is the result of the Fourier transform of the point-source phase combined with a random phase.
This complex field distribution is the input to the iteration process. Secondly, we propagate this light field using the Fourier transform method. Next, we apply the first constraint by modifying the amplitude distribution, that is, by replacing it with the measured modulus while keeping the phase distribution unchanged. We use the root mean square error (RMSE) between the reconstructed field and the target field to control the iteration process. The RMSE decreases at each iteration, giving rise to an error reduction in the reconstructed wavefront. We then extend this method to the reconstruction of multiple point sources. Thus the overall aim of this thesis has been to create an algorithm that is able to reconstruct multi-point-source objects from only their modulus. The method could then be used in biological microscopy applications where it is necessary to determine the position of a fluorescing source within a volume of biological tissue.
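The iteration described above can be sketched in Python (an illustrative Gerchberg-Saxton-style variant, not the thesis code; the grid size, point positions and energy normalization are assumptions made for the example):

```python
import numpy as np

def mgsa_phase_hologram(target_amp, iters=50, seed=0):
    """Modified Gerchberg-Saxton sketch: iterate between the hologram
    plane (phase-only constraint) and the reconstruction plane (modulus
    replaced by the target), recording the RMSE at each iteration."""
    rng = np.random.default_rng(seed)
    phase = rng.uniform(0.0, 2.0 * np.pi, target_amp.shape)
    t = target_amp / np.linalg.norm(target_amp)        # unit-energy target
    errors = []
    for _ in range(iters):
        field = np.fft.fft2(np.exp(1j * phase))        # propagate
        a = np.abs(field) / np.linalg.norm(field)
        errors.append(np.sqrt(np.mean((a - t) ** 2)))  # RMSE vs. target
        # amplitude constraint: keep the phase, impose the target modulus
        constrained = t * np.exp(1j * np.angle(field))
        phase = np.angle(np.fft.ifft2(constrained))    # phase-only hologram
    return phase, errors

# Three hypothetical point sources as the target modulus.
target = np.zeros((64, 64))
target[[16, 40, 28], [20, 44, 50]] = 1.0
holo_phase, errors = mgsa_phase_hologram(target)
```

The decreasing `errors` list plays the role of the RMSE stopping criterion described in the abstract.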
89

An optimization framework for fixed-point digital signal processing.

January 2003 (has links)
Lam Yuet Ming. Thesis (M.Phil.)--Chinese University of Hong Kong, 2003. Includes bibliographical references (leaves 80-86). Abstracts in English and Chinese.

Contents:
Chapter 1 --- Introduction --- p.1
    1.1 Motivation --- p.1
        1.1.1 Difficulties of fixed-point design --- p.1
        1.1.2 Why still fixed-point? --- p.2
        1.1.3 Difficulties of converting floating-point to fixed-point --- p.2
        1.1.4 Why wordlength optimization? --- p.3
    1.2 Objectives --- p.3
    1.3 Contributions --- p.3
    1.4 Thesis Organization --- p.4
Chapter 2 --- Review --- p.5
    2.1 Introduction --- p.5
    2.2 Simulation approach to address quantization issue --- p.6
    2.3 Analytical approach to address quantization issue --- p.8
    2.4 Implementation of speech systems --- p.9
    2.5 Discussion --- p.10
    2.6 Summary --- p.11
Chapter 3 --- Fixed-point arithmetic background --- p.12
    3.1 Introduction --- p.12
    3.2 Fixed-point representation --- p.12
    3.3 Fixed-point addition/subtraction --- p.14
    3.4 Fixed-point multiplication --- p.16
    3.5 Fixed-point division --- p.18
    3.6 Summary --- p.20
Chapter 4 --- Fixed-point class implementation --- p.21
    4.1 Introduction --- p.21
    4.2 Fixed-point simulation using overloading --- p.21
    4.3 Fixed-point class implementation --- p.24
        4.3.1 Fixed-point object declaration --- p.24
        4.3.2 Overload the operators --- p.25
        4.3.3 Arithmetic operations --- p.26
        4.3.4 Automatic monitoring of dynamic range --- p.27
        4.3.5 Automatic calculation of quantization error --- p.27
        4.3.6 Array supporting --- p.28
        4.3.7 Cosine calculation --- p.28
    4.4 Summary --- p.29
Chapter 5 --- Speech recognition background --- p.30
    5.1 Introduction --- p.30
    5.2 Isolated word recognition system overview --- p.30
    5.3 Linear predictive coding processor --- p.32
        5.3.1 The LPC model --- p.32
        5.3.2 The LPC processor --- p.33
    5.4 Vector quantization --- p.36
    5.5 Hidden Markov model --- p.38
    5.6 Summary --- p.40
Chapter 6 --- Optimization --- p.41
    6.1 Introduction --- p.41
    6.2 Simplex Method --- p.41
        6.2.1 Initialization --- p.42
        6.2.2 Reflection --- p.42
        6.2.3 Expansion --- p.44
        6.2.4 Contraction --- p.44
        6.2.5 Stop --- p.45
    6.3 One-dimensional optimization approach --- p.45
        6.3.1 One-dimensional optimization approach --- p.46
        6.3.2 Search space reduction --- p.47
        6.3.3 Speeding up convergence --- p.48
    6.4 Summary --- p.50
Chapter 7 --- Word Recognition System Design Methodology --- p.51
    7.1 Introduction --- p.51
    7.2 Framework design --- p.51
        7.2.1 Fixed-point class --- p.52
        7.2.2 Fixed-point application --- p.53
        7.2.3 Optimizer --- p.53
    7.3 Speech system implementation --- p.54
        7.3.1 Model training --- p.54
        7.3.2 Simulate the isolated word recognition system --- p.56
        7.3.3 Hardware cost model --- p.57
        7.3.4 Cost function --- p.58
        7.3.5 Fraction size optimization --- p.59
        7.3.6 One-dimensional optimization --- p.61
    7.4 Summary --- p.63
Chapter 8 --- Results --- p.64
    8.1 Model training --- p.64
    8.2 Simplex method optimization --- p.65
        8.2.1 Simulation platform --- p.65
        8.2.2 System level optimization --- p.66
        8.2.3 LPC processor optimization --- p.67
        8.2.4 One-dimensional optimization --- p.68
    8.3 Speeding up the optimization convergence --- p.71
    8.4 Optimization criteria --- p.73
    8.5 Summary --- p.75
Chapter 9 --- Conclusion --- p.76
    9.1 Search space reduction --- p.76
    9.2 Speeding up the searching --- p.77
    9.3 Optimization criteria --- p.77
    9.4 Flexibility of the framework design --- p.78
    9.5 Further development --- p.78
Bibliography --- p.80
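The operator-overloading idea behind this thesis's fixed-point simulation framework (Chapters 3-4) can be sketched in Python (the `Fixed` class, its 8 fractional bits and its helpers are invented for the example and do not reproduce the thesis's implementation):

```python
class Fixed:
    """Minimal fixed-point value via operator overloading: quantizes to
    `frac` fractional bits and tracks the largest magnitude seen, so the
    required integer wordlength can be monitored automatically."""
    max_seen = 0.0  # class-wide dynamic-range monitor

    def __init__(self, value: float, frac: int = 8):
        self.frac = frac
        self.value = round(value * (1 << frac)) / (1 << frac)  # quantize
        Fixed.max_seen = max(Fixed.max_seen, abs(self.value))

    def __add__(self, other):
        return Fixed(self.value + other.value, self.frac)

    def __mul__(self, other):
        # the double-width product is requantized back to `frac` bits
        return Fixed(self.value * other.value, self.frac)

    def error(self, exact: float) -> float:
        """Quantization error relative to an exact reference value."""
        return abs(self.value - exact)

a, b = Fixed(0.7), Fixed(0.3)   # stored as 0.69921875 and 0.30078125
s = a + b                       # overloaded '+' requantizes the sum
```

Sweeping `frac` over such a model and re-running the recognizer is exactly the kind of wordlength optimization loop the thesis automates.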
90

On Timing-Based Localization in Cellular Radio Networks

Radnosrati, Kamiar January 2018 (has links)
The possibilities for positioning in cellular networks have increased over time, pushed by growing needs for location-based products and services for a variety of purposes. It all started with rough position estimates based on timing measurements and sector information available in the Global System for Mobile communications (GSM), and today there is an increased standardization effort to provide more position-relevant measurements in cellular communication systems to improve localization accuracy and availability. A first purpose of this thesis is to survey recent efforts in the area and their potential for localization. The rest of the thesis then investigates three particular aspects, with focus on timing measurements: how can these be combined in the best way in Long Term Evolution (LTE), what is the potential of the new narrow-band communication links for localization, and can the timing measurement error be modeled more accurately? The first contribution concerns a narrow-band standard in LTE intended for Internet of Things (IoT) devices. This LTE standard includes a special positioning reference signal sent synchronized by all base stations (BS) to all IoT devices. Each device can then compute several pair-wise time differences, each corresponding to a hyperbolic function. Using multilateration methods, the intersection of a set of such hyperbolas can be computed. An extensive performance study using a professional simulation environment with realistic user models is presented, indicating that a decent position accuracy can be achieved despite the narrow bandwidth of the channel. The second contribution is a study of how downlink measurements in LTE can be combined. Time of flight (TOF) to the serving BS and time difference of arrival (TDOA) to the neighboring BSs are used as measurements. From a geometrical perspective, the position estimation problem involves computing the intersection of a circle and hyperbolas, all with uncertain radii.
We propose a fusion framework for both snapshot estimation and filtering, and evaluate it on both simulated and experimental field test data. The results indicate that the position accuracy is better than 40 meters 95% of the time. A third study in the thesis analyzes the statistical distribution of timing measurement errors in LTE systems. Three different machine learning methods are applied to experimental data to fit Gaussian mixture distributions to the observed measurement errors. Since current positioning algorithms are mostly based on Gaussian distribution models, knowledge of a good model for the measurement errors can be used to improve the accuracy and robustness of the algorithms. The obtained results indicate that a single Gaussian distribution is not adequate to model the real TOA measurement errors. One possible future study is to further develop standard algorithms with these models.
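As an illustration of fitting a Gaussian mixture to timing errors, here is a small EM sketch in Python (not one of the thesis's three methods; the error data are simulated with invented values, with a positively biased component mimicking non-line-of-sight delays):

```python
import numpy as np

def fit_gmm_1d(x, k=2, iters=200):
    """Plain EM for a one-dimensional Gaussian mixture model."""
    mu = np.linspace(x.min(), x.max(), k)   # spread out initial means
    var = np.full(k, x.var())
    w = np.full(k, 1.0 / k)
    for _ in range(iters):
        # E-step: responsibility of each component for each sample
        dens = (np.exp(-0.5 * (x[:, None] - mu) ** 2 / var)
                / np.sqrt(2 * np.pi * var))
        r = w * dens
        r /= r.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means and variances
        nk = r.sum(axis=0)
        w = nk / len(x)
        mu = (r * x[:, None]).sum(axis=0) / nk
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk
    return w, mu, var

# Simulated TOA errors (illustrative numbers): mostly zero-mean noise
# plus a smaller, positively biased non-line-of-sight-like component.
rng = np.random.default_rng(1)
err = np.concatenate([rng.normal(0.0, 5.0, 8000),
                      rng.normal(60.0, 15.0, 2000)])
w, mu, var = fit_gmm_1d(err)
```

The recovered two-component fit, as opposed to a single Gaussian, captures the heavy positive tail that the abstract argues a single Gaussian cannot model.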
