1 |
Analysis and interpretation of full waveform sonic data
Astbury, S. January 1985 (has links)
No description available.
|
2 |
Improving the determination of moment tensors, moment magnitudes and focal depths of earthquakes below Mw 4.0 using regional broadband seismic data
Dahal, Nawa January 2019 (has links)
Thesis advisor: Michael J. Naughton / Thesis advisor: John E. Ebel / Determining accurate source parameters of small-magnitude earthquakes is important for understanding the source physics and tectonic processes that activate a seismic source, as well as for making more accurate estimates of the probabilities of recurrence of large earthquakes based on the statistics of smaller earthquakes. Accurate determination of the focal depths and focal mechanisms of small earthquakes is required to constrain the potential seismic source zones of future large earthquakes, whereas accurate determination of seismic moment is required to calculate the sizes (best represented by moment magnitudes) of earthquakes. The precise determination of focal depths, moment magnitudes and focal mechanisms of small earthquakes can greatly advance our knowledge of the potentially active faults in an area and thus help produce accurate seismic hazard and risk maps for that area. Focal depths, moment magnitudes and focal mechanisms of earthquakes with magnitudes Mw 4.0 and less recorded by a sparse seismic network are usually poorly constrained, due to the lack of an appropriate method for finding these parameters with a sparse set of observations. This dissertation presents a new method that can accurately determine the focal depths, moment magnitudes and focal mechanisms of earthquakes with magnitudes between Mw 4.0 and Mw 2.5 using broadband seismic waveforms recorded by local and regional seismic stations. For the determination of the focal depths and moment magnitudes, the observed and synthetic seismograms are filtered through a 1-3 Hz bandpass filter, whereas for the determination of the focal mechanisms they are filtered through a 1.5-2.5 Hz bandpass filter. Both passbands have a good signal-to-noise ratio (SNR) for small earthquakes of the magnitudes analyzed in this dissertation. The waveforms are processed to their envelopes in order to make them relatively simple to model. A grid search is performed over all possible dip, rake and strike angles, as well as over possible depths and scalar moments, to find the optimal focal depth and scalar moment. To find the optimal focal mechanism, a non-linear moment-tensor inversion is performed in addition to the coarse grid search over the possible dip, rake and strike angles at fixed values of the focal depth and scalar moment. The method is tested on 18 aftershocks of Mw between 3.70 and 2.60 of the 2011 Mineral, Virginia Mw 5.7 earthquake, and on 5 aftershocks of Mw between 3.62 and 2.63 of the 2013 Ladysmith, Quebec Mw 4.5 earthquake. Reliable focal depths and moment magnitudes are obtained for all of these events using waveforms from as few as one seismic station at epicentral distances of 68-424 km with SNR greater than or equal to 5. Similarly, reliable focal mechanisms are obtained for all of the events with Mw 3.70-3.04 using waveforms from at least 3 seismic stations at epicentral distances of 60-350 km, each with SNR greater than or equal to 10. Tests show that the moment magnitudes and focal depths are not very sensitive to the crustal model used, although systematic variations in the focal depths are observed with the total crustal thickness.
Tests also show that the focal mechanisms obtained with different crustal structures vary by a Kagan angle of 30° on average over the events and crustal structures tested. This means that the event moment magnitude and event focal mechanism determinations are only somewhat sensitive to the uncertainties in the crustal models tested. The method is applied to some aftershocks of the Mw 7.8, 2015 Gorkha, Nepal earthquake, showing that the method, although developed by analyzing data from eastern North America, appears to give good results when applied in a very different tectonic environment in a different part of the world. This study confirms that the method of modeling envelopes of seismic waveforms developed in this dissertation can be used to extract accurate focal depths and moment magnitudes of earthquakes with Mw 3.70-2.60 using broadband seismic data recorded by local and regional seismic stations at epicentral distances of 68-424 km, and accurate focal mechanisms of earthquakes with Mw 3.70-3.04 using broadband seismic data recorded at epicentral distances of 60-350 km. / Thesis (PhD) — Boston College, 2019. / Submitted to: Boston College. Graduate School of Arts and Sciences. / Discipline: Physics.
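As a rough illustration of the envelope-modeling grid search described above, the Python sketch below bandpass-filters traces to 1-3 Hz, takes Hilbert envelopes, and scans trial depths and mechanisms. It is a minimal sketch, not the dissertation's code: `synthetic_for` is a hypothetical stand-in for a synthetic-seismogram generator, and the scalar moment is fitted in closed form as an amplitude factor rather than grid-searched as in the dissertation.

```python
# Minimal sketch of an envelope-based grid search for focal depth, mechanism
# and scalar moment. `synthetic_for(depth, strike, dip, rake)` is a
# hypothetical placeholder for a synthetic-seismogram generator.
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def envelope(trace, fs, fmin=1.0, fmax=3.0):
    """Bandpass to 1-3 Hz (as in the dissertation) and take the Hilbert envelope."""
    sos = butter(4, [fmin, fmax], btype="band", fs=fs, output="sos")
    return np.abs(hilbert(sosfiltfilt(sos, trace)))

def grid_search(observed, fs, synthetic_for, depths_km,
                strikes=range(0, 360, 20), dips=range(5, 90, 15),
                rakes=range(-180, 180, 20)):
    obs = envelope(observed, fs)
    best_misfit, best = np.inf, None
    for h in depths_km:
        for s in strikes:
            for d in dips:
                for r in rakes:
                    syn = envelope(synthetic_for(h, s, d, r), fs)
                    # The scalar moment scales amplitude linearly, so here it
                    # is fitted in closed form instead of grid-searched.
                    m0 = np.dot(obs, syn) / np.dot(syn, syn)
                    misfit = np.linalg.norm(obs - m0 * syn)
                    if misfit < best_misfit:
                        best_misfit = misfit
                        best = dict(depth_km=h, strike=s, dip=d, rake=r, M0=m0)
    return best
```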
|
3 |
Digital Walsh-Fourier Analyser for Periodic Waveforms
Siemens, Karl-Hans 05 1900 (has links)
<p> This thesis describes a proposed design of a special-purpose digital instrument that will obtain the first 32 coefficients of the Walsh-Fourier series of a low-fundamental-frequency periodic voltage. The mathematics are developed for applying Walsh functions to obtain a Walsh-Fourier series in the same manner as sinusoidal waves are used to obtain a Fourier series of a periodic wave. It is shown how Walsh-Fourier coefficients are employed to obtain a Fourier series. Some familiar waveforms are shown as examples. The mathematical concepts are applied to the design of the instrument, of which two major portions have been constructed using integrated circuits. The Walsh-Fourier coefficients are available at the end of the second cycle of the input. The upper fundamental frequency limit of the instrument is approximately 60 Hz. There is no low-frequency limit.</p> / Thesis / Master of Engineering (MEngr)
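As a numerical sketch of what the instrument computes in hardware, the Python fragment below builds a sequency-ordered Walsh matrix from the Hadamard matrix and extracts the first 32 Walsh-Fourier coefficients of one period of a sampled waveform. The power-of-two sampling grid is an assumption of this sketch, not a detail from the thesis.

```python
# Hedged sketch: first 32 Walsh-Fourier coefficients of one sampled period.
import numpy as np
from scipy.linalg import hadamard

def walsh_matrix(n):
    """Sequency-ordered Walsh matrix from the natural-ordered Hadamard matrix:
    sequency index -> Gray code -> bit reversal gives the Hadamard row."""
    H = hadamard(n)
    bits = n.bit_length() - 1            # n must be a power of two
    def ham_row(s):
        g = s ^ (s >> 1)                 # Gray code of the sequency index
        return int(format(g, f"0{bits}b")[::-1], 2)  # bit reversal
    return H[[ham_row(s) for s in range(n)]]

def walsh_coefficients(x, n_coeffs=32):
    """Coefficients a_k = (1/N) sum_t x[t] wal(k, t) for one period of x."""
    n = len(x)
    W = walsh_matrix(n)
    return (W[:n_coeffs] @ x) / n

t = np.arange(256)
square = np.where(t < 128, 1.0, -1.0)    # one period of a square wave
print(walsh_coefficients(square)[:4])    # ~[0, 1, 0, 0]: all energy at sequency 1
```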
|
4 |
An Optimal Design Method for MRI Teardrop Gradient Waveforms
Ren, Tingting 08 1900 (has links)
<p> This thesis presents an optimal design method for MRI (Magnetic Resonance Imaging) teardrop gradient waveforms in two and three dimensions. Teardrop in two dimensions was introduced at ISMRM 2001 by Anand et al. to address the need for a high-efficiency balanced k-space trajectory for real-time cardiac SSFP (Steady State Free Precession) imaging.</p> <p> We have modeled 2D and 3D teardrop gradient waveform design as nonlinear convex optimization problems with a variety of constraints, including global constraints (e.g., moment nulling for motion insensitivity). Commercial optimization solvers can solve the models efficiently. The implementation of AMPL models and numerical testing results with the solver MOSEK are provided. This optimal design procedure produces physically realizable teardrop waveforms which enable real-time cardiac imaging with equipment otherwise incapable of it, and optimally achieves the goals of maximum resolution and motion artifact reduction. The approach may extend to other waveform design problems in MRI and builds a good foundation for further research in this area.</p> / Thesis / Master of Science (MSc)
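A toy version of such a constrained waveform design is sketched below in cvxpy rather than the thesis's AMPL/MOSEK models; the gradient limits, raster time and k-space waypoints are invented for illustration. It balances the trajectory (net zero k-space), nulls the first gradient moment for motion insensitivity, and respects amplitude and slew-rate limits, but it is only a simplified stand-in for the actual teardrop models.

```python
# Hedged sketch: balanced, moment-nulled gradient waveform design as a convex
# program. All numerical values are illustrative assumptions.
import numpy as np
import cvxpy as cp

gamma = 42.577e6                 # gyromagnetic ratio, Hz/T
N, dt = 256, 4e-6                # samples and raster time (s)
Gmax, Smax = 40e-3, 150.0        # amplitude (T/m) and slew (T/m/s) limits
k_mid = np.array([150.0, 0.0])   # k-space point to reach mid-readout (1/m)

g = cp.Variable((N, 2))                    # 2D gradient waveform, T/m
t = (np.arange(N) + 0.5) * dt
K = cp.cumsum(g, axis=0) * gamma * dt      # running k-space trajectory

constraints = [
    cp.norm(g, axis=1) <= Gmax,                           # amplitude limit
    cp.norm(cp.diff(g, axis=0), axis=1) <= Smax * dt,     # slew-rate limit
    K[N // 2] == k_mid,                                   # pass through target
    K[-1] == 0,                                           # balanced: k ends at zero
    cp.sum(cp.multiply(g, t[:, None]), axis=0) * dt == 0, # first-moment null
]
prob = cp.Problem(cp.Minimize(cp.sum_squares(cp.diff(g, axis=0))), constraints)
prob.solve()                               # any conic solver (e.g., MOSEK) works
```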
|
5 |
PROPOSED NEW WAVEFORM CONCEPT FOR BANDWIDTH AND POWER EFFICIENT TT&C
Olsen, Donald P. 10 1900 (has links)
International Telemetering Conference Proceedings / October 25-28, 1999 / Riviera Hotel and Convention Center, Las Vegas, Nevada / Most traditional approaches to TT&C have employed waveforms that are neither power- nor bandwidth-efficient. A new approach to TT&C waveforms greatly improves both efficiencies. Binary Gaussian Minimum Shift Keying (GMSK) provides a constant-envelope, bandwidth-efficient signal for applications above about 10 kbps. The constant envelope preserves the spectrum through saturated amplifiers, and the waveform provides the best power efficiency when used with turbo coding. For protection against various kinds of burst errors, it includes hybrid interleaving for memory and delay efficiency, and it supports packet-compatible operations in Time Division Multiple Access (TDMA) environments. Commanding, telemetry, mission data transmission, and tracking are multiplexed in a TDMA format.
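As a hedged illustration of why GMSK keeps a constant envelope, the short Python sketch below shapes the data with a Gaussian filter and feeds the result into the phase only, so the magnitude of every output sample is exactly one. BT = 0.3 and the other parameters are illustrative assumptions; the paper does not specify them.

```python
# Sketch of a binary GMSK modulator: Gaussian-filtered NRZ data integrated
# into phase, giving a constant-envelope complex baseband signal.
import numpy as np

def gmsk_modulate(bits, sps=8, bt=0.3, span=4):
    nrz = 2 * np.asarray(bits) - 1.0
    # Gaussian frequency pulse; unit area so each bit advances phase by +/- pi/2
    t = np.arange(-span * sps, span * sps + 1) / sps
    h = np.exp(-2 * (np.pi * bt * t) ** 2 / np.log(2))
    h /= h.sum()
    freq = np.convolve(np.repeat(nrz, sps), h)   # instantaneous frequency
    phase = np.pi / 2 * np.cumsum(freq) / sps    # integrate frequency to phase
    return np.exp(1j * phase)                    # data live only in the phase

s = gmsk_modulate(np.random.randint(0, 2, 100))
assert np.allclose(np.abs(s), 1.0)               # constant envelope
```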
|
6 |
REDUCED COMPLEXITY TRELLIS DETECTION OF SOQPSK-TG
Nelson, Tom 10 1900 (has links)
ITC/USA 2006 Conference Proceedings / The Forty-Second Annual International Telemetering Conference and Technical Exhibition / October 23-26, 2006 / Town and Country Resort & Convention Center, San Diego, California / The optimum detector for shaped offset QPSK (SOQPSK) is a trellis detector which has high complexity (as measured by the number of detection filters and trellis states) due to the memory inherent in this modulation. In this paper we exploit the cross-correlated, trellis-coded, quadrature modulation (XTCQM) representation of SOQPSK-TG to formulate a reduced complexity detector. We show that a factor of 128 reduction in the number of trellis states of the detector can be achieved with a loss of only 0.2 dB in bit error rate performance as compared to optimum at P(b) = 10^(-5).
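For illustration, the add-compare-select recursion at the heart of any such trellis detector is sketched below on a generic toy trellis; the states, detection filters and branch metrics of the actual SOQPSK-TG detector are far richer. Because the work per step grows linearly with the number of states, a factor of 128 reduction in states cuts the detector's workload by roughly the same factor.

```python
# Toy Viterbi (add-compare-select) recursion over a generic binary-input trellis.
import numpy as np

def viterbi(branch_metrics, next_state, start_state=0):
    """branch_metrics[k][s][b]: metric of input bit b from state s at step k.
    next_state[s][b]: successor state. Returns the max-metric bit sequence."""
    n_states = len(next_state)
    metric = np.full(n_states, -np.inf)
    metric[start_state] = 0.0
    surv = [[] for _ in range(n_states)]          # survivor path per state
    for bm in branch_metrics:
        new_metric = np.full(n_states, -np.inf)
        new_surv = [[] for _ in range(n_states)]
        for s in range(n_states):
            if np.isinf(metric[s]):
                continue                           # state not yet reachable
            for b in (0, 1):                       # add-compare-select
                ns = next_state[s][b]
                m = metric[s] + bm[s][b]
                if m > new_metric[ns]:
                    new_metric[ns], new_surv[ns] = m, surv[s] + [b]
        metric, surv = new_metric, new_surv
    return surv[int(np.argmax(metric))]

# Toy 2-state trellis where bit b always moves to state b.
print(viterbi([[[0.9, 0.1], [0.2, 0.8]]] * 5, [[0, 1], [0, 1]]))  # -> [0, 0, 0, 0, 0]
```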
|
7 |
FPGA-based Implementation of Concatenative Speech Synthesis Algorithm
Bamini, Praveen Kumar 29 October 2003 (links)
The main aim of a text-to-speech synthesis system is to convert ordinary text into an acoustic signal that is indistinguishable from human speech. This thesis presents an architecture to implement a concatenative speech synthesis algorithm targeted to FPGAs. Many current text-to-speech systems are based on the concatenation of acoustic units of recorded speech, and current concatenative speech synthesizers are capable of producing highly intelligible speech. However, the quality of speech often suffers from discontinuities between the acoustic units due to contextual differences. Concatenation is the simplest method of producing synthetic speech: prerecorded acoustic elements are joined to form a continuous speech signal. The software implementation of the algorithm is performed in C, whereas the hardware implementation is done in structural VHDL. A database of acoustic elements is first formed by recording sounds for different phones. The architecture is designed to concatenate the acoustic elements corresponding to the phones that form the target word, i.e., the word to be synthesized. The architecture does not address the discontinuities between the acoustic elements, as its ultimate goal is simply the synthesis of speech. The hardware implementation is verified on a Virtex (v800hq240-4) FPGA device.
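A minimal sketch of the concatenation step is shown below. The phone inventory and placeholder waveforms are invented for illustration, and the short crossfade at each joint is an addition for listenability rather than part of the thesis architecture, which concatenates units directly.

```python
# Sketch of concatenative synthesis: look up a prerecorded waveform per phone
# and join the units in order. phone_db holds placeholder audio, not real
# recordings.
import numpy as np

phone_db = {                     # phone -> sampled waveform (placeholder audio)
    p: np.random.randn(1600) * 0.1 for p in ("HH", "EH", "L", "OW")
}

def synthesize(phones, crossfade=80):
    """Concatenate acoustic units, linearly crossfading at each joint."""
    out = phone_db[phones[0]].copy()
    ramp = np.linspace(0.0, 1.0, crossfade)
    for p in phones[1:]:
        unit = phone_db[p]
        out[-crossfade:] = out[-crossfade:] * (1 - ramp) + unit[:crossfade] * ramp
        out = np.concatenate([out, unit[crossfade:]])
    return out

speech = synthesize(["HH", "EH", "L", "OW"])   # "hello"-like phone string
```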
|
8 |
Recognition of phonemes using shapes of speech waveforms in WAL
Carandang, Alfonso B. January 1994 (links)
Generating a phonetic transcription of the speech waveform is one method which can be applied to continuous speech recognition. Current methods of labelling a speech wave involve the use of techniques based on spectrographic analysis. This paper presents a computationally simple method by which some phonemes can be identified primarily by their shapes.

Three shapes which are regularly manifested by three phonemes were examined in utterances made by a number of speakers. Features were then devised to recognise their patterns using finite state automata combined with a checking mechanism. These were implemented in the Wave Analysis Language (WAL) system developed at the University of Canberra, and the results showed that the phonemes can be recognised with high accuracy. The resulting shape features have also demonstrated a degree of speaker independence and context dependency.
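A toy version of shape-based recognition with a finite state automaton is sketched below; the rise/fall/flat alphabet, threshold and example automaton are invented for illustration and are much simpler than WAL's actual shape features.

```python
# Sketch: reduce a waveform to rise/fall/flat symbols, then run a hand-built
# FSA that accepts one characteristic shape (a single peak).
import numpy as np

def shape_symbols(x, flat=0.01):
    d = np.diff(x)
    return ["R" if v > flat else "F" if v < -flat else "-" for v in d]

# States: 0 = start, 1 = rising seen, 2 = falling after rise (accepting).
TRANS = {
    (0, "R"): 1, (0, "-"): 0,
    (1, "R"): 1, (1, "-"): 1, (1, "F"): 2,
    (2, "F"): 2, (2, "-"): 2,
}

def accepts(symbols, accept={2}):
    state = 0
    for sym in symbols:
        state = TRANS.get((state, sym))
        if state is None:
            return False               # no transition: shape rejected
    return state in accept

t = np.linspace(0, np.pi, 50)
print(accepts(shape_symbols(np.sin(t))))   # True: one rise-then-fall peak
```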
|
9 |
FPGA-based implementation of concatenative speech synthesis algorithm [electronic resource] / by Praveen Kumar Bamini.
Bamini, Praveen Kumar. January 2003 (links)
Title from PDF of title page. / Document formatted into pages; contains 68 pages. / Thesis (M.S.Cp.E.)--University of South Florida, 2003. / Includes bibliographical references. / Text (Electronic thesis) in PDF format. / ABSTRACT: The main aim of a text-to-speech synthesis system is to convert ordinary text into an acoustic signal that is indistinguishable from human speech. This thesis presents an architecture to implement a concatenative speech synthesis algorithm targeted to FPGAs. Many current text-to-speech systems are based on the concatenation of acoustic units of recorded speech, and current concatenative speech synthesizers are capable of producing highly intelligible speech. However, the quality of speech often suffers from discontinuities between the acoustic units due to contextual differences. Concatenation is the simplest method of producing synthetic speech: prerecorded acoustic elements are joined to form a continuous speech signal. The software implementation of the algorithm is performed in C, whereas the hardware implementation is done in structural VHDL. A database of acoustic elements is first formed by recording sounds for different phones. / ABSTRACT: The architecture is designed to concatenate the acoustic elements corresponding to the phones that form the target word, i.e., the word to be synthesized. The architecture does not address the discontinuities between the acoustic elements, as its ultimate goal is simply the synthesis of speech. The hardware implementation is verified on a Virtex (v800hq240-4) FPGA device. / System requirements: World Wide Web browser and PDF reader. / Mode of access: World Wide Web.
|
10 |
Enhancement of the Signal-to-Noise Ratio in Sonic Logging Waveforms by Seismic Interferometry
Aldawood, Ali 04 1900 (links)
Sonic logs are essential tools for reliably identifying interval velocities which, in turn, are used in many seismic processes. One problem that arises while logging is irregularities due to washout zones along the borehole surface, which scatter the transmitted energy and hence weaken the signal recorded at the receivers. To alleviate this problem, I have extended the theory of super-virtual refraction interferometry to enhance the signal-to-noise ratio (SNR) of sonic waveforms. Tests on synthetic and real data show noticeable SNR enhancements of refracted P-wave arrivals in the sonic waveforms.

The theory of super-virtual interferometric stacking is composed of two redatuming steps, each followed by a stacking procedure. The first redatuming step is of correlation type: traces are correlated together to obtain virtual traces with the sources datumed to the refractor. The second redatuming step is of convolution type: traces are convolved together to redatum the sources back to their original positions. The stacking procedure following each step enhances the signal-to-noise ratio of the refracted P-wave first arrivals.

Datuming with correlation and convolution of traces introduces severe artifacts, denoted correlation artifacts, in the super-virtual data. To overcome this problem, I replace the datuming-with-correlation step by datuming with deconvolution. Although the former datuming method is more robust, the latter reduces the artifacts significantly. Moreover, deconvolution can amplify noise, which is why a regularization term is utilized, rendering the datuming with deconvolution more stable. Tests of datuming with deconvolution instead of correlation on synthetic and real data examples show a significant reduction of these artifacts, especially when compared with the conventional way of applying the super-virtual refraction interferometry method.
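On toy 1-D traces, the two building blocks might be sketched as below: a correlation-type step plus stacking to build the virtual trace, and a water-level regularized deconvolution standing in for the more artifact-free alternative. The array shapes, geometry and regularization constant `eps` are illustrative assumptions, not the thesis's implementation.

```python
# Sketch of interferometric redatuming primitives on toy traces.
import numpy as np

def correlate_and_stack(traces_a, traces_b):
    """Correlation-type redatuming: cross-correlate trace pairs over all
    sources, then stack to enhance the refracted arrival (~sqrt(N) SNR gain)."""
    n = traces_a.shape[1]
    stack = np.zeros(2 * n - 1)
    for a, b in zip(traces_a, traces_b):      # loop over sources
        stack += np.correlate(b, a, mode="full")
    return stack / len(traces_a)

def deconvolve(b, a, eps=1e-2):
    """Regularized (water-level) deconvolution of trace a out of trace b in
    the frequency domain: B conj(A) / (|A|^2 + eps |A|_max^2). The eps term
    keeps the division stable where A is weak, taming noise amplification."""
    n = len(a) + len(b) - 1
    A, B = np.fft.rfft(a, n), np.fft.rfft(b, n)
    return np.fft.irfft(B * np.conj(A) / (np.abs(A) ** 2 + eps * np.abs(A).max() ** 2), n)
```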
|