  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
161

Study of a recursive method for matrix inversion via signal processing experiments

Ganjidoost, Mohammad January 2010 (has links)
Typescript, etc. / Digitized by Kansas Correctional Industries
162

Beyond ICA: advances on temporal BYY learning, state space modeling and blind signal processing. / CUHK electronic theses & dissertations collection

January 2000 (has links)
by Yiu-ming Cheung. / "July 2000." / Thesis (Ph.D.)--Chinese University of Hong Kong, 2000. / Includes bibliographical references (p. 98-106). / Electronic reproduction. Hong Kong : Chinese University of Hong Kong, [2012] System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Mode of access: World Wide Web. / Abstracts in English and Chinese.
163

Reconstruction of multiple point sources by employing a modified Gerchberg-Saxton iterative algorithm

Habool Al-Shamery, Maitham January 2018 (has links)
Digital holography has been developed and used in many applications. It is a technique by which a wavefront can be recorded and then reconstructed, often even in the absence of the original object. In this project, we use digital holography methods in which the original object amplitude and phase are recorded numerically, allowing these data to be downloaded to a spatial light modulator (SLM). This provides digital holography with capabilities that are not available using optical holographic methods. The digitally reconstructed holographic image can be refocused to different depths depending on the reconstruction distance. This remarkable aspect of digital holography is useful in many applications, and one of the most beneficial is the study of biological cells. In this research, point-source digital in-line and off-axis holography with numerical reconstruction has been studied. The point-source hologram can be used in many biological applications. As the original object we use a binary amplitude Fresnel zone plate (FZP), which consists of rings of alternating opaque and transparent transmittance. The in-line hologram of a spherical wave of wavelength λ emanating from the point source is employed first; we subsequently employ an off-axis point source in which the original point-source object is translated away from its on-axis location. First, we create the binary amplitude FZP, which is taken as the hologram of the point source, and we develop a phase-only digital hologram calculation technique for the single point-source object. We use a modified Gerchberg-Saxton algorithm (MGSA) instead of the non-iterative algorithm employed in classical analogue holography. The first complex amplitude distribution, i(x, y), is the result of the Fourier transform of the point-source phase combined with a random phase; this complex field distribution is the input to the iteration process. We then propagate this light field using the Fourier transform method, and apply the first constraint by modifying the amplitude distribution, that is, by replacing it with the measured modulus while keeping the phase distribution unchanged. We use the root mean square error (RMSE) between the reconstructed field and the target field to control the iteration process; the RMSE decreases at each iteration, giving an error reduction in the reconstructed wavefront. We then extend this method to the reconstruction of multiple point sources. The overall aim of this thesis has thus been to create an algorithm able to reconstruct multi-point-source objects from only their modulus. The method could then be used for biological microscopy applications in which it is necessary to determine the position of a fluorescing source within a volume of biological tissue.
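As a rough illustration of the error-reduction loop described in this abstract, the sketch below implements a modified Gerchberg-Saxton iteration in Python/NumPy. It is a minimal reading of the abstract, not the thesis code: the grid size, energy normalization, and two-point target are illustrative assumptions.

```python
import numpy as np

def mgsa_phase_retrieval(target_modulus, n_iter=200, tol=1e-6, seed=0):
    """Find a phase-only hologram whose Fourier modulus matches target_modulus.

    Error-reduction loop per the abstract: start from a random phase,
    Fourier-propagate, enforce the measured modulus while keeping the phase,
    back-propagate, and monitor the RMSE against the target modulus.
    """
    rng = np.random.default_rng(seed)
    field = np.exp(1j * 2 * np.pi * rng.random(target_modulus.shape))
    # Scale the target so its energy matches the unit-amplitude hologram field.
    target_modulus = target_modulus * (
        np.sqrt(field.size) / np.linalg.norm(target_modulus))
    rmse = np.inf
    for _ in range(n_iter):
        recon = np.fft.fft2(field, norm="ortho")   # propagate to image plane
        rmse = np.sqrt(np.mean((np.abs(recon) - target_modulus) ** 2))
        if rmse < tol:
            break
        # Constraint: replace the modulus with the target, keep the phase.
        recon = target_modulus * np.exp(1j * np.angle(recon))
        # Back-propagate and enforce a phase-only field in the hologram plane.
        field = np.exp(1j * np.angle(np.fft.ifft2(recon, norm="ortho")))
    return np.angle(field), rmse

# Toy usage: desired reconstruction is two point sources on a 128x128 grid.
target = np.zeros((128, 128))
target[40, 40] = 1.0
target[80, 90] = 1.0
hologram_phase, err = mgsa_phase_retrieval(target)
```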
164

An optimization framework for fixed-point digital signal processing.

January 2003 (has links)
Lam Yuet Ming. / Thesis (M.Phil.)--Chinese University of Hong Kong, 2003. / Includes bibliographical references (leaves 80-86). / Abstracts in English and Chinese. / Chapter 1 --- Introduction --- p.1 / Chapter 1.1 --- Motivation --- p.1 / Chapter 1.1.1 --- Difficulties of fixed-point design --- p.1 / Chapter 1.1.2 --- Why still fixed-point? --- p.2 / Chapter 1.1.3 --- Difficulties of converting floating-point to fixed-point --- p.2 / Chapter 1.1.4 --- Why wordlength optimization? --- p.3 / Chapter 1.2 --- Objectives --- p.3 / Chapter 1.3 --- Contributions --- p.3 / Chapter 1.4 --- Thesis Organization --- p.4 / Chapter 2 --- Review --- p.5 / Chapter 2.1 --- Introduction --- p.5 / Chapter 2.2 --- Simulation approach to address quantization issue --- p.6 / Chapter 2.3 --- Analytical approach to address quantization issue --- p.8 / Chapter 2.4 --- Implementation of speech systems --- p.9 / Chapter 2.5 --- Discussion --- p.10 / Chapter 2.6 --- Summary --- p.11 / Chapter 3 --- Fixed-point arithmetic background --- p.12 / Chapter 3.1 --- Introduction --- p.12 / Chapter 3.2 --- Fixed-point representation --- p.12 / Chapter 3.3 --- Fixed-point addition/subtraction --- p.14 / Chapter 3.4 --- Fixed-point multiplication --- p.16 / Chapter 3.5 --- Fixed-point division --- p.18 / Chapter 3.6 --- Summary --- p.20 / Chapter 4 --- Fixed-point class implementation --- p.21 / Chapter 4.1 --- Introduction --- p.21 / Chapter 4.2 --- Fixed-point simulation using overloading --- p.21 / Chapter 4.3 --- Fixed-point class implementation --- p.24 / Chapter 4.3.1 --- Fixed-point object declaration --- p.24 / Chapter 4.3.2 --- Overload the operators --- p.25 / Chapter 4.3.3 --- Arithmetic operations --- p.26 / Chapter 4.3.4 --- Automatic monitoring of dynamic range --- p.27 / Chapter 4.3.5 --- Automatic calculation of quantization error --- p.27 / Chapter 4.3.6 --- Array supporting --- p.28 / Chapter 4.3.7 --- Cosine calculation --- p.28 / Chapter 4.4 --- Summary --- p.29 / Chapter 5 --- Speech recognition background --- p.30 / Chapter 5.1 --- Introduction --- p.30 / Chapter 5.2 --- Isolated word recognition system overview --- p.30 / Chapter 5.3 --- Linear predictive coding processor --- p.32 / Chapter 5.3.1 --- The LPC model --- p.32 / Chapter 5.3.2 --- The LPC processor --- p.33 / Chapter 5.4 --- Vector quantization --- p.36 / Chapter 5.5 --- Hidden Markov model --- p.38 / Chapter 5.6 --- Summary --- p.40 / Chapter 6 --- Optimization --- p.41 / Chapter 6.1 --- Introduction --- p.41 / Chapter 6.2 --- Simplex Method --- p.41 / Chapter 6.2.1 --- Initialization --- p.42 / Chapter 6.2.2 --- Reflection --- p.42 / Chapter 6.2.3 --- Expansion --- p.44 / Chapter 6.2.4 --- Contraction --- p.44 / Chapter 6.2.5 --- Stop --- p.45 / Chapter 6.3 --- One-dimensional optimization approach --- p.45 / Chapter 6.3.1 --- One-dimensional optimization approach --- p.46 / Chapter 6.3.2 --- Search space reduction --- p.47 / Chapter 6.3.3 --- Speeding up convergence --- p.48 / Chapter 6.4 --- Summary --- p.50 / Chapter 7 --- Word Recognition System Design Methodology --- p.51 / Chapter 7.1 --- Introduction --- p.51 / Chapter 7.2 --- Framework design --- p.51 / Chapter 7.2.1 --- Fixed-point class --- p.52 / Chapter 7.2.2 --- Fixed-point application --- p.53 / Chapter 7.2.3 --- Optimizer --- p.53 / Chapter 7.3 --- Speech system implementation --- p.54 / Chapter 7.3.1 --- Model training --- p.54 / Chapter 7.3.2 --- Simulate the isolated word recognition system --- p.56 / Chapter 7.3.3 --- Hardware cost model --- p.57 / Chapter 7.3.4 --- Cost 
function --- p.58 / Chapter 7.3.5 --- Fraction size optimization --- p.59 / Chapter 7.3.6 --- One-dimensional optimization --- p.61 / Chapter 7.4 --- Summary --- p.63 / Chapter 8 --- Results --- p.64 / Chapter 8.1 --- Model training --- p.64 / Chapter 8.2 --- Simplex method optimization --- p.65 / Chapter 8.2.1 --- Simulation platform --- p.65 / Chapter 8.2.2 --- System level optimization --- p.66 / Chapter 8.2.3 --- LPC processor optimization --- p.67 / Chapter 8.2.4 --- One-dimensional optimization --- p.68 / Chapter 8.3 --- Speeding up the optimization convergence --- p.71 / Chapter 8.4 --- Optimization criteria --- p.73 / Chapter 8.5 --- Summary --- p.75 / Chapter 9 --- Conclusion --- p.76 / Chapter 9.1 --- Search space reduction --- p.76 / Chapter 9.2 --- Speeding up the searching --- p.77 / Chapter 9.3 --- Optimization criteria --- p.77 / Chapter 9.4 --- Flexibility of the framework design --- p.78 / Chapter 9.5 --- Further development --- p.78 / Bibliography --- p.80
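The outline above (Chapter 4) describes simulating fixed-point arithmetic by overloading operators so that quantization error can be monitored while a wordlength optimizer searches over fraction sizes. A minimal sketch of that idea follows, written in Python for brevity rather than the C++ the thesis framework presumably uses; the wordlengths and test data are illustrative only.

```python
class Fixed:
    """Toy fixed-point value with a given number of fractional bits.

    Mirrors the fixed-point-by-operator-overloading idea: every result is
    re-quantized, so the quantization error of a candidate wordlength can be
    measured against a floating-point reference.
    """

    def __init__(self, value, frac_bits=8):
        self.frac_bits = frac_bits
        self.scale = 1 << frac_bits
        self.raw = int(round(value * self.scale))  # stored integer

    @property
    def value(self):
        return self.raw / self.scale

    def __add__(self, other):
        return Fixed(self.value + other.value, self.frac_bits)

    def __mul__(self, other):
        return Fixed(self.value * other.value, self.frac_bits)

    def __repr__(self):
        return f"Fixed({self.value}, frac_bits={self.frac_bits})"


# Quantization error of a tiny dot product for two candidate fraction sizes.
coeffs, xs = [0.731, -0.214, 0.055], [1.25, -0.5, 2.0]
exact = sum(c * x for c, x in zip(coeffs, xs))
for bits in (4, 8):
    acc = Fixed(0.0, bits)
    for c, x in zip(coeffs, xs):
        acc = acc + Fixed(c, bits) * Fixed(x, bits)
    print(bits, "fractional bits -> error", abs(acc.value - exact))
```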
165

On Timing-Based Localization in Cellular Radio Networks

Radnosrati, Kamiar January 2018 (has links)
The possibilities for positioning in cellular networks have increased over time, pushed by increased needs for location-based products and services for a variety of purposes. It all started with rough position estimates based on timing measurements and sector information available in the global system for mobile communication (gsm), and today there is an increased standardization effort to provide more position-relevant measurements in cellular communication systems to improve localization accuracy and availability. A first purpose of this thesis is to survey recent efforts in the area and their potential for localization. The rest of the thesis then investigates three particular aspects, where the focus is on timing measurements. How can these be combined in the best way in long term evolution (lte), what is the potential of the new narrow-band communication links for localization, and can the timing measurement error be more accurately modeled? The first contribution concerns a narrow-band standard in lte intended for internet of things (iot) devices. This lte standard includes a special position reference signal sent synchronously by all base stations (bs) to all iot devices. Each device can then compute several pair-wise time differences that correspond to hyperbolic functions. Using multilateration methods, the intersection of a set of such hyperbolas can be computed. An extensive performance study using a professional simulation environment with realistic user models is presented, indicating that a decent position accuracy can be achieved despite the narrow bandwidth of the channel. The second contribution is a study of how downlink measurements in lte can be combined. Time of flight (tof) to the serving bs and time difference of arrival (tdoa) to the neighboring bs are used as measurements. From a geometrical perspective, the position estimation problem involves computing the intersection of a circle and hyperbolas, all with uncertain radii. We propose a fusion framework for both snapshot estimation and filtering, and evaluate it with both simulated and experimental field test data. The results indicate that the position accuracy is better than 40 meters 95% of the time. A third study in the thesis analyzes the statistical distribution of timing measurement errors in lte systems. Three different machine learning methods are applied to the experimental data to fit Gaussian mixture distributions to the observed measurement errors. Since current positioning algorithms are mostly based on Gaussian distribution models, knowledge of a good model for the measurement errors can be used to improve the accuracy and robustness of the algorithms. The obtained results indicate that a single Gaussian distribution is not adequate to model the real toa measurement errors. One possible future study is to further develop standard algorithms with these models.
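A small numerical sketch of the hyperbolic multilateration idea described in this abstract: pairwise time differences define hyperbolas, and their intersection is found here with a Gauss-Newton least-squares iteration. The 2-D geometry, base-station layout, timing-noise level, and solver are illustrative assumptions, not details from the thesis.

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def tdoa_residuals(pos, bs, tdoa):
    """Residuals of measured range differences (relative to bs[0]) at pos."""
    d = np.linalg.norm(bs - pos, axis=1)
    return (d[1:] - d[0]) - tdoa * C

def solve_tdoa(bs, tdoa, x0, n_iter=20):
    """Gauss-Newton solution of the hyperbolic multilateration problem."""
    x = np.array(x0, dtype=float)
    for _ in range(n_iter):
        d = np.linalg.norm(bs - x, axis=1)
        u = (x - bs) / d[:, None]     # unit vectors from base stations to x
        J = u[1:] - u[0]              # Jacobian of range differences
        r = tdoa_residuals(x, bs, tdoa)
        x -= np.linalg.lstsq(J, r, rcond=None)[0]
    return x

# Toy scenario: four base stations, one device, noisy time differences.
rng = np.random.default_rng(1)
bs = np.array([[0.0, 0.0], [1500.0, 0.0], [0.0, 1500.0], [1500.0, 1500.0]])
truth = np.array([400.0, 900.0])
d = np.linalg.norm(bs - truth, axis=1)
tdoa = (d[1:] - d[0]) / C + rng.normal(0, 30e-9, size=3)  # ~30 ns timing noise
print(solve_tdoa(bs, tdoa, x0=[750.0, 750.0]))
```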
166

Frequency Tracking for Speed Estimation

Lindfors, Martin January 2018 (has links)
Estimating the frequency of a periodic signal, or tracking the time-varying frequency of an almost periodic signal, is an important problem that is well studied in the literature. This thesis focuses on two subproblems where contributions can be made to the existing theory: frequency tracking methods and measurements containing outliers. Maximum-likelihood-based frequency estimation methods are studied, focusing on methods which can handle outliers in the measurements. Katkovnik's frequency estimation method is generalized to real and harmonic signals, and a new method based on expectation-maximization is proposed. The methods are compared in a simulation study in which the measurements contain outliers. The proposed methods are compared with the standard periodogram method. Recursive Bayesian methods for frequency tracking are studied, focusing on the Rao-Blackwellized point mass filter (RBPMF). Two reformulations of the RBPMF aiming to reduce computational costs are proposed. Furthermore, the technique of variational approximate Rao-Blackwellization is proposed, which allows usage of a Student's t distributed measurement noise model. This enables recursive frequency tracking methods to handle outliers using heavy-tailed noise models in Rao-Blackwellized filters such as the RBPMF. A simulation study illustrates the performance of the methods when outliers occur in the measurement noise. The framework above is applied to and studied in detail in two applications. The first application is frequency tracking of engine sound. Microphone measurements are used to track the frequency of Doppler-shifted variants of the engine sound of a vehicle moving through an area. These estimates can be used to compute the speed of the vehicle. Periodogram-based methods and the RBPMF are evaluated on simulated and experimental data. The results indicate that the RBPMF has lower RMSE than periodogram-based methods when tracking fast changes in the frequency. The second application relates to frequency tracking of wheel vibrations, where a car has been equipped with an accelerometer. The accelerometer measurements are used to track the frequency of the wheel axle vibrations, which relates to the wheel rotational speed. The velocity of the vehicle can then be estimated without any other sensors and without requiring integration of the accelerometer measurements. In situations with high signal-to-noise ratio (SNR), the methods perform well. To remedy situations where the methods perform poorly, an accelerometer input is introduced to the formulation. This input is used to predict changes in the frequency for short time intervals. / Periodic signals occur frequently in practice. In many applications it is of interest to estimate the frequency of these periodic signals, or vibrations, from measurements of them. This is called frequency estimation or frequency tracking, depending on whether the frequency is constant or varies over time. Two applications are studied in this licentiate thesis; the goal in both is to estimate the speed of vehicles. The first application concerns tracking the frequency of a vehicle's engine sound as the vehicle drives through an area where microphones have been deployed. The vehicle's speed can be estimated from the engine sound, whose frequency is affected by the Doppler effect. This thesis investigates improved tracking of this frequency, which improves the speed estimate. Two different approaches to frequency tracking are used. One approach is to assume that the frequency is constant within short time intervals and compute an estimate of the frequency. Another approach is to use a mathematical model that accounts for the frequency varying over time and to try to track it. For this purpose the Rao-Blackwellized point mass filter is proposed: a method that exploits the structure of the mathematical model of the problem to obtain good performance with lower demands on computational power. The results show that the proposed method improves the accuracy of the frequency tracking in some cases, which can improve the performance of the speed estimation. The second application concerns estimating a vehicle's speed using only an accelerometer (a sensor measuring acceleration) mounted on the chassis. Wheel vibrations can be measured by this accelerometer, and the frequencies of these vibrations are given by the rotational speed of the wheel axle. If the wheel radius is known or estimated, the vehicle's speed can be computed without external measurements such as gps or wheel-speed sensors. The acceleration measurements are noisy and contain outliers, i.e. measurements that occasionally and randomly deviate strongly from what is expected, so methods designed to handle these are studied. An approximation to Rao-Blackwellization is proposed to handle the outliers, and a new frequency estimation method based on expectation-maximization, another method that exploits structure in mathematical models, is also proposed. A simulation study shows that the methods have lower average estimation error than standard methods. On collected experimental data it is shown that the methods often work, but that they need to be complemented with an additional dead-reckoning component (predicted values) based on the accelerometer in order to increase the number of test cases in which they achieve acceptable performance.
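A minimal sketch of the first, windowed approach mentioned in the summary: assume the frequency is roughly constant within a short window and take the periodogram peak in each window. The chirp test signal, window length, and noise/outlier levels are illustrative assumptions rather than values from the thesis.

```python
import numpy as np

def track_frequency(x, fs, win=1024, hop=512):
    """Per-window periodogram peak as a simple (non-robust) frequency tracker."""
    freqs = []
    for start in range(0, len(x) - win + 1, hop):
        seg = x[start:start + win] * np.hanning(win)
        spec = np.abs(np.fft.rfft(seg)) ** 2
        k = np.argmax(spec[1:]) + 1        # skip the DC bin
        freqs.append(k * fs / win)
    return np.array(freqs)

# Toy signal: frequency sweeping 100 -> 140 Hz, plus noise and sparse outliers.
rng = np.random.default_rng(0)
fs, duration = 8000.0, 2.0
t = np.arange(int(fs * duration)) / fs
f_true = 100.0 + 20.0 * t / duration
x = np.sin(2 * np.pi * np.cumsum(f_true) / fs) + 0.1 * rng.standard_normal(t.size)
x[::4000] += 5.0                           # occasional outlier samples
print(track_frequency(x, fs)[:5])
```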
167

Forward error correction as equalization method

Molin, Jakob January 2019 (has links)
The ever-increasing demand for high data rates in communication systems is driving high-speed links toward multi-gigabit-per-second data transmission, where they suffer from inter-symbol interference due to bandwidth limitations. Equalizers are used at both the transmitter and receiver sides of the link to counteract signal attenuation, reflections, crosstalk and other distortions of the signal. 2-level pulse amplitude modulation is today the most commonly used signal modulation format. To achieve higher data rates while keeping the same bandwidth, higher-order pulse amplitude modulation must be used. The disadvantage is that the signal-to-noise ratio gets worse, which increases the bit error rate. This master's thesis investigates forward error correction as an equalization method, to compensate for the increased bit error rate when using higher-order signal modulation. A Reed-Solomon forward error corrector, whose strength lies in correcting bursts of errors, was implemented. Two different testbenches were used to recreate the errors that appear in a real channel. Probability plots were used to investigate how well the Reed-Solomon code could compensate in low bit-error-rate regions. The probability plots showed that the Reed-Solomon (544,514) code would be able to reduce the bit error rate from down to . The same Reed-Solomon code was used in the channel simulations, where the output bit error rate correlated with the probability plots.
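The trade-off described above (higher-order PAM at the same bandwidth and noise level costs bit error rate) can be illustrated with a small Monte-Carlo sketch over an AWGN channel. The Gray mapping, SNR definition, and symbol counts are assumptions made for the example; they are not taken from the thesis setup.

```python
import numpy as np

def pam_ber(levels, snr_db, n_sym=200_000, seed=0):
    """Empirical bit error rate of Gray-coded M-PAM over an AWGN channel."""
    rng = np.random.default_rng(seed)
    m = len(levels)
    bits_per_sym = int(np.log2(m))
    gray = [0, 1, 3, 2][:m]                      # Gray map for 2- and 4-PAM
    sym_idx = rng.integers(0, m, n_sym)
    tx = levels[sym_idx]
    es = np.mean(levels ** 2)
    sigma = np.sqrt(es / 10 ** (snr_db / 10))    # SNR defined as Es / sigma^2
    rx = tx + sigma * rng.normal(size=n_sym)
    det_idx = np.abs(rx[:, None] - levels[None, :]).argmin(axis=1)  # slicer
    tx_bits = np.array([gray[i] for i in sym_idx], dtype=np.uint8)
    rx_bits = np.array([gray[i] for i in det_idx], dtype=np.uint8)
    bit_errors = np.unpackbits(tx_bits ^ rx_bits).sum()
    return bit_errors / (n_sym * bits_per_sym)

pam2 = np.array([-1.0, 1.0])
pam4 = np.array([-3.0, -1.0, 1.0, 3.0]) / np.sqrt(5.0)   # unit average energy
for name, lv in [("PAM-2", pam2), ("PAM-4", pam4)]:
    print(name, "BER at 12 dB SNR:", pam_ber(lv, 12.0))
```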
168

Punishment and human signal detection

Lie, Celia, n/a January 2007 (has links)
Detection and choice research have largely focused on the effects of relative reinforcer frequencies or magnitudes. The effects of punishment have received much less attention. This thesis investigated the effects of punishment on human signal-detection performance using a number of different procedures. These included punisher frequency and magnitude variations, different types of punishers (point loss & time-outs), variations in stimulus disparity, and different detection tasks (judgments of stimulus arrays containing either more blue or red objects, or judgments of statements that were either true or false). It examined whether punishers have similar, but opposite, effects to reinforcers on detection performance, and whether the effects of punishment were successfully captured by existing models of punishment and choice. Experiment 1 varied the relative frequency or magnitude of time-out punishers for errors using the blue/red task. Participants were systematically biased away from the response alternative associated with the higher rate or magnitude of time-out punishers in two of three procedures. Experiment 2 varied the relative frequency of point-loss punishers using the blue/red task and the true/false task. Participants were systematically biased away from the alternative associated with the higher rate of point-loss punishers for the true/false task. Experiment 3 examined the effects of punishment on response bias from a psychophysical perspective. Previous detection research which varied stimulus discriminability while holding reinforcer ratios constant and unequal (Johnstone & Alsop, 2000; McCarthy & Davison, 1984) found that a criterion location measure (e.g., c, Green & Swets, 1966) was a better descriptor of isobias functions than a likelihood ratio measure (e.g., log β[G], Green & Swets, 1966). Experiment 3 varied stimulus discriminability while holding punisher ratios constant and unequal. Like previous research, isobias functions were consistent with a criterion location measure. Experiments 4, 5, 6, and 7 examined contemporary models of choice and punishment. Experiments 4, 5, and 6 varied the relative reinforcer ratio in detection tasks, both with and without the inclusion of an equal rate of punishment. Experiment 7 held the reinforcer ratio constant and unequal, and varied the durations of time-out punishers. Increases in preference (for the richer alternative) from reinforcer-only conditions to reinforcer + punisher conditions would support a subtractive model of punishment, while decreases in preference would support an additive model of punishment. Experiment 4 was a between-groups study using time-out punishers. It supported the predictions of an additive model. Experiment 5 used three different procedures in a preliminary within-subjects design, evaluating which procedure was best suited for a larger within-subjects experiment (Experiment 6). In Experiment 6, participants sat four reinforcer-only and four reinforcer + punisher conditions where reinforcers were point-gains and punishers were point-losses. The results from Experiment 6 were mixed - some participants showed increased preference while others showed little change or a slight decrease. This appeared related to the order in which participants received the reinforcer-only and reinforcer + punisher conditions. Experiment 7 also found no consistent change in preference with increases in time-out durations.
Instead, there was a slow increase in bias toward the richer alternative across the eight sessions. Overall, punishers had similar, but opposite, effects to reinforcers in detection procedures (Experiments 1, 2, & 3). These effects were successfully captured by Davison and Tustin's (1978) model of detection. The later experiments did not provide support for a subtractive punishment model of choice, which had provided the best descriptor in corresponding concurrent-schedule research. Instead, Experiment 4 supported an additive model, and Experiments 5, 6, and 7 provided no evidence for either model - limitations and implications of these studies are discussed. However, the present thesis illustrates that the signal detection procedure is promising for studying the combined effects of reinforcement and punishment, and may offer a worthwhile complement to standard concurrent-schedule choice procedures.
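A minimal sketch of the standard signal-detection quantities referred to above (sensitivity d', the criterion location c, and the likelihood-ratio measure log β of Green & Swets, 1966), assuming the usual equal-variance Gaussian model; the example hit and false-alarm counts are hypothetical, not taken from the thesis.

```python
from statistics import NormalDist

def sdt_measures(hits, misses, false_alarms, correct_rejections):
    """Equal-variance Gaussian SDT: sensitivity d', criterion c, and log beta."""
    z = NormalDist().inv_cdf
    # Log-linear correction keeps rates away from 0 and 1.
    hr = (hits + 0.5) / (hits + misses + 1)
    far = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    d_prime = z(hr) - z(far)
    c = -0.5 * (z(hr) + z(far))       # criterion location
    log_beta = d_prime * c            # log likelihood ratio at the criterion
    return d_prime, c, log_beta

# Hypothetical counts from a blue/red judgment block biased away from
# the punished alternative.
print(sdt_measures(hits=78, misses=22, false_alarms=35, correct_rejections=65))
```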
169

Uniform concentric circular and spherical arrays with frequency invariant characteristics: theory, design and applications /

Chen, Haihua. January 2006 (has links)
Thesis (Ph. D.)--University of Hong Kong, 2007. / Title proper from title frame. Also available in printed format.
170

Use of elicitor sets to characterize cellular signal transduction networks

Narayanan, Arthi 26 September 2003 (has links)
Intracellular signaling cascades can no longer be viewed as linear pathways that relay and amplify information. Often, components of different pathways interact, resulting in signaling networks. The interactions of different pathways and the dynamic modulation of the activities of the components within signaling pathways can create a multitude of biological outputs. The cell appears to use these pathways as a way of integrating multiple inputs to shape a uniquely defined output. These outputs allow the cell to respond to and adapt to an ever-changing environment. Understanding how biological systems receive, process and respond to complex data inputs has important implications for the design and utilization of sensors for a variety of applications, including toxicology, pharmacology, medical diagnostics, and environmental monitoring. This study uses the elicitor sets method, which is an experimental framework designed to monitor information flows through signal transduction pathways. The elicitor set approach has been used to derive mechanistic interpretations from the action of phenylmethylsulfonyl fluoride (PMSF), a serine protease inhibitor and nerve agent analog. The elicitor panel comprises signal transduction network effectors, namely forskolin, clonidine, cirazoline and H-89, each of which targets the signaling pathway at known specific points. The elicitor set experiments enable compartmentalization of the cAMP signaling pathway, examining the role played by each segment and identifying possible cross-talk mechanisms. Our experiments substantiate that selection of adenyl cyclase as the reference node and 10 µM forskolin as the primary elicitor segments the upper portion of the G-protein coupled receptor (GPCR) pathway associated with the Gq and Gi proteins. Application of the secondary elicitors, 100 nM clonidine (α2-adrenergic receptor agonist), 1 pM and 100 pM cirazoline (α1-adrenergic receptor agonists), and 1 µM and 100 µM H-89 (PKA inhibitor), fortifies the decoupling, as the system is unresponsive to clonidine and cirazoline in the presence of forskolin, while continuing to respond to H-89. Exposure of the cells to 1 mM PMSF subsequent to forskolin addition restricted the quantifiable impact of PMSF to regions of the signaling pathways below adenyl cyclase. Triggering the system by use of secondary elicitors augmented the information resolution, which is reinforced by the increased sensitivity of cells to 100 µM H-89, acting at an important checkpoint below adenyl cyclase. / Graduation date: 2004
