21

A 1.1V 25µW Sigma-Delta modulator for voice applications

Yang, Shu-Ting 11 July 2005 (has links)
A low-voltage, low-power sigma-delta modulator for voice applications is presented. The proposed sigma-delta modulator is implemented as a switched-capacitor circuit. Bootstrapped switches replace CMOS transmission gates to overcome the insufficient drive of the switched-capacitor circuit under low-voltage operation. To reduce power dissipation, an improved current-mirror OTA with rail-to-rail output swing was designed; it raises the voltage gain by 10~20 dB and overcomes the poor voltage gain of the traditional current-mirror OTA. Post-simulation results show that the modulator achieves a dynamic range of 77 dB and a peak signal-to-noise ratio of 82 dB while dissipating 25 µW from a 1.1-V supply, using TSMC 0.18 µm 1P6M CMOS technology.
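A minimal numerical sketch of the idea, assuming a generic first-order, 1-bit discrete-time sigma-delta loop rather than the modulator described in the thesis; the clock rate, oversampling ratio, and input amplitude are illustrative assumptions used only to show how in-band SNR is estimated from the bitstream.

```python
import numpy as np

fs = 1.024e6          # assumed modulator clock rate [Hz]
f_in = 1.0e3          # 1 kHz test tone in the voice band
osr = 128             # assumed oversampling ratio -> ~4 kHz signal band
n = 65536
t = np.arange(n) / fs
x = 0.5 * np.sin(2 * np.pi * f_in * t)   # input at half of full scale

# First-order loop: discrete-time integrator, 1-bit quantizer, unit feedback.
v = np.zeros(n)
integ = 0.0
for i in range(n):
    fb = v[i - 1] if i > 0 else 0.0      # previous quantizer output fed back
    integ += x[i] - fb
    v[i] = 1.0 if integ >= 0.0 else -1.0

# Estimate in-band SNR from the windowed spectrum of the bitstream.
win = np.hanning(n)
spec = np.abs(np.fft.rfft(v * win)) ** 2
freqs = np.fft.rfftfreq(n, 1 / fs)
band = freqs <= fs / (2 * osr)                    # in-band bins
sig_bin = int(np.argmin(np.abs(freqs - f_in)))
sig = spec[sig_bin - 3: sig_bin + 4].sum()        # signal power (a few bins)
noise = spec[band].sum() - sig
print("in-band SNR ~ %.1f dB" % (10 * np.log10(sig / noise)))
```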
22

Applications in Broadband THz Spectroscopy Towards Material Studies

Turksen, Zeynep 01 January 2011 (has links) (PDF)
The purpose of this work was to construct and analyze a THz time-domain spectroscopy (THz-TDS) system using a nanojoule-per-pulse ultrafast laser (a non-amplified ultrafast laser, or oscillator) source and a non-linear optical method for THz generation. First, a THz-TDS system using the photoconductive antenna (PCA) method for THz generation was built to understand the working principles of these systems. This system, which used a PCA for generation and a 2 mm thick <110> ZnTe crystal for detection, had a bandwidth up to 1 THz with a 1000:1 signal-to-noise ratio (S/N). Using this system, various materials were investigated to study the usefulness of the obtained bandwidth, and the absorption coefficients and refractive indices of the sample materials were calculated. The results showed that the bandwidth of the system was not sufficient to obtain the fingerprint properties of these materials. To improve the system, optical rectification was used for THz generation: a different THz-TDS system was built with a 1 mm thick <110> ZnTe crystal for non-linear generation of THz radiation. Theoretical calculations of the radiated intensity and electric field were performed to analyze the expected bandwidth of the system. The results showed that the thicknesses of the generation and detection crystals determine the obtained bandwidth, i.e. the bandwidth-limiting factor is the crystal thickness and not the ultrafast laser pulse duration. In particular for detection, measurements with both 1 mm and 2 mm thick <110> ZnTe crystals showed little difference in bandwidth, as predicted by theory. To increase the signal-to-noise ratio, the optics in the system were also optimized: using the same focal length for the focusing and collimating optics around the generation crystal and a short-focal-length parabolic mirror improved the S/N. After these improvements, the THz-TDS system using optical rectification for THz generation and electro-optic detection had a larger bandwidth, up to 3 THz, but a lower 100:1 signal-to-noise ratio.
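The abstract mentions extracting absorption coefficients and refractive indices from the measured traces; the sketch below follows the standard thick-slab THz-TDS extraction (phase difference for n, amplitude ratio with a simple Fresnel correction for the absorption coefficient) and is an illustration under that assumption, not the author's own code.

```python
import numpy as np

c = 2.998e8  # speed of light [m/s]

def thz_tds_extract(t, e_ref, e_sam, d):
    """t: time axis [s]; e_ref, e_sam: reference and sample E-field traces;
    d: sample thickness [m]. Returns frequency [Hz], n(f), alpha(f) [1/m]."""
    dt = t[1] - t[0]
    f = np.fft.rfftfreq(len(t), dt)
    H = np.fft.rfft(e_sam) / np.fft.rfft(e_ref)   # complex transfer function
    phase = -np.unwrap(np.angle(H))               # positive phase delay
    omega = 2 * np.pi * f
    with np.errstate(divide="ignore", invalid="ignore"):
        n = 1.0 + c * phase / (omega * d)         # refractive index
        fresnel = 4 * n / (n + 1.0) ** 2          # two-interface field loss
        alpha = -(2.0 / d) * np.log(np.abs(H) / fresnel)  # power absorption
    return f, n, alpha
```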
23

A Packet-Buffered Mobile IP with Fast Retransmission in Wireless LANs

Lyu, Sian-Bin 19 August 2003 (has links)
Today's Mobile IP supports host mobility by dynamically changing IP addresses as the mobile host roams through the Internet. However, performance problems remain during handoffs, such as packet loss and throughput degradation. In this thesis, we propose a mechanism to reduce packet loss during handoff. Packet buffering at the home agent is initiated by the mobile host when the signal-to-noise ratio of the wireless link falls below a predefined threshold. Once the handoff has completed, the home agent immediately delivers the first packet in the buffer to the mobile host. The home agent then clears the buffered packets already received by the mobile host, based on the returned ACK, so that no duplicate packets are sent out. In addition, we propose a route-selection policy that reduces end-to-end transmission delay by sending probe packets along the candidate paths. For the purpose of demonstration, we implemented the mechanism on a Linux platform. Measurements from the experiment show that the proposed mechanism improves throughput and solves the packet retransmission problem during handoffs.
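A schematic sketch of the buffering logic described above (start buffering when the reported link SNR drops below a threshold, and on handoff completion flush everything the returned ACK does not cover); the class and method names are hypothetical, and the actual work was a Linux implementation rather than this toy model.

```python
from collections import deque

class HomeAgentBuffer:
    """Illustrative home-agent buffering triggered by a low-SNR report."""

    def __init__(self, snr_threshold_db=10.0):
        self.snr_threshold_db = snr_threshold_db
        self.buffering = False
        self.buffer = deque()                    # (seq, packet) pairs

    def on_link_report(self, snr_db):
        # Mobile host reports a weakening link; start buffering before handoff.
        self.buffering = snr_db < self.snr_threshold_db

    def on_packet(self, seq, packet, send):
        send(seq, packet)                        # forward as usual
        if self.buffering:
            self.buffer.append((seq, packet))    # keep a copy for the handoff

    def on_handoff_complete(self, last_acked_seq, send):
        # Drop packets the mobile host already acknowledged, resend the rest.
        while self.buffer:
            seq, packet = self.buffer.popleft()
            if seq > last_acked_seq:
                send(seq, packet)
        self.buffering = False
```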
24

A Framework for the Analysis and Evaluation of Optical Imaging Systems with Arbitrary Response Functions

Wang, Zhipeng January 2008 (has links)
The scientific applications and engineering aspects of multispectral and hyperspectral imaging systems have been studied extensively. The traditional geometric model of a spectral imaging system was developed specifically for sensors with spectrally non-overlapping bands. Spectral imaging systems with overlapping bands also exist. For example, quantum-dot infrared photodetectors (QDIPs) for midwave- and longwave-infrared (IR) imaging exhibit highly overlapping spectral responses that are tunable through the applied bias voltage. This makes it possible to build a spectrally tunable IR imaging system based on a single QDIP; furthermore, a QDIP-based system can be operated adaptively with respect to the scene. Other optical imaging systems, such as the human eye and some polarimetric sensing systems, also have overlapping bands. To analyze such sensors, a functional-analysis-based framework is provided in this dissertation. The framework starts from a mathematical description of the interaction between the sensor and the radiation reaching it from the scene, and a geometric model of the spectral imaging process is built on it. The spectral response functions and the scene spectra are treated as vectors in a spectral space, and the spectral imaging process is abstracted as a projection of the scene spectrum onto the sensor. The projected spectrum, which is the least-squares-error reconstruction of the scene vector, contains the information useful for image processing. Spectral sensors with arbitrary spectral response functions can be analyzed with this model. The framework leads directly to an image pre-processing algorithm that removes the data correlation between bands. Further discussion shows that the model can also serve for sensor evaluation, and thus facilitates comparison between different sensors. The spectral shapes and the signal-to-noise ratios (SNR) of the different bands influence the sensor's imaging ability in different ways, which are discussed in detail. With the newly defined SNR in spectral space, the photodetector noise of a spectral sensor with overlapping bands can be characterized quantitatively. Finally, the idea of adaptive imaging with a QDIP-based sensor is proposed and illustrated.
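The projection view described above can be illustrated numerically: with the band responses as rows of a matrix R, the least-squares reconstruction of the scene spectrum from the measurements R s is exactly its projection onto the row space of R. The band shapes and scene spectrum below are toy assumptions, not the dissertation's sensors.

```python
import numpy as np

wl = np.linspace(400, 1000, 301)                      # wavelength grid [nm]

def band(center, width):                              # one broad, overlapping band
    return np.exp(-0.5 * ((wl - center) / width) ** 2)

R = np.stack([band(c, 80.0) for c in (500, 600, 700, 800)])  # 4 overlapping bands
s = 1.0 + 0.5 * np.sin(wl / 60.0)                     # toy scene spectrum

m = R @ s                                             # band measurements
s_hat, *_ = np.linalg.lstsq(R, m, rcond=None)         # min-norm least-squares solution
# s_hat is the projection of s onto the row space of R: the part of the
# scene spectrum the sensor can actually recover.
print("reconstruction captures %.1f%% of the scene energy"
      % (100 * np.dot(s_hat, s_hat) / np.dot(s, s)))
```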
25

Microfluidically Cryo-Cooled Planar Coils for Magnetic Resonance Imaging

Koo, Chiwan 16 December 2013 (has links)
A high signal-to-noise ratio (SNR) is required for higher resolution and faster imaging speed in magnetic resonance imaging (MRI). Planar microcoils used as receiver probes in MRI systems can be configured into array elements for fast imaging and enable the imaging of extremely small objects. Microcoils, however, are dominated by thermal noise and suffer from limited SNR. Cryo-cooling the microcoils can reduce this thermal noise, but conventional cryostats are not well suited to microcoils because they typically use a thick vacuum gap to keep the sample being imaged near room temperature during cryo-cooling. This vacuum gap is typically larger than the most sensitive region of the microcoil, which defines the imaging depth and is approximately equal to the coil diameter. Here, microfluidic technology is used to locally cryo-cool the microcoils and minimize the thermal isolation gap, so that the imaging surface lies within the imaging depth of the microcoils. The first system consists of a planar microcoil with microfluidic cryo-cooling channels separated from the imaging surface by a thin N2 gap. The microcoil was locally cryo-cooled while the sample was maintained above 8°C; MR images acquired on a 4.7 Tesla MRI system show an average SNR enhancement of 1.47-fold. Second, the system was further developed into a cryo-cooled microcoil system with inductive coupling, so that both the microcoil and the on-chip microfabricated resonating capacitor are cryo-cooled to further improve the quality factor (Q). Inductive coupling eliminates the physical connection between the microcoil and the tuning network, so that a single cryo-cooling microfluidic channel can enclose both the microcoil and the capacitor with minimal loss in cooling capacity. The Q improvement was 2.6-fold compared with a conventional microcoil using high-Q varactors and a transmission-line connection. Microfluidically tunable capacitors with 653% tunability and a Q 1.3-fold higher than a conventional varactor were also developed and demonstrated as matching/tuning networks as a proof of concept. The developed microfluidic cryo-cooling system and tunable capacitors for improving SNR will potentially allow MR microcoils to produce high-resolution images of small samples.
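As a rough, hedged illustration of why cryo-cooling helps and why measured gains are far below the ideal coil-noise-limited value, the arithmetic below uses a textbook noise model with assumed coil temperatures and copper resistance ratio; none of these numbers come from the thesis.

```python
import math

T_warm, T_cold = 300.0, 77.0    # assumed coil temperatures [K]
r_ratio = 0.15                  # assumed R_cold / R_warm for a copper coil

# Coil-noise-limited case: noise voltage scales as sqrt(T * R), so the SNR gain is
ideal_gain = math.sqrt((T_warm * 1.0) / (T_cold * r_ratio))
print("ideal coil-noise-limited SNR gain ~ %.1fx" % ideal_gain)

# A fixed, uncooled noise contribution (sample, electronics), expressed as a
# fraction of the warm coil noise power, dilutes the gain considerably:
for other_frac in (0.0, 0.5, 0.8):
    warm_noise = 1.0 + other_frac
    cold_noise = (T_cold * r_ratio) / (T_warm * 1.0) + other_frac
    print("uncooled-noise fraction %.1f -> SNR gain %.2fx"
          % (other_frac, math.sqrt(warm_noise / cold_noise)))
```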
26

Characterizing low copy DNA signal using simulated and experimental data

Peters, Kelsey 13 July 2017 (has links)
Sir Alec Jeffreys was the first to describe human identification with deoxyribonucleic acid (DNA) in his seminal work in 1985 (1); the result was the birth of forensic DNA analysis. Since then, DNA has become the primary substance used for human identification testing. Forensic DNA analysis has evolved since the work of Jeffreys and now incorporates the analysis of 15 to 24 STR (short tandem repeat) locations, or loci (2-4). The simultaneous amplification and subsequent electrophoresis of tens of STR polymorphisms results in an analysis that is highly discriminating. DNA target masses of 0.5 to 2 nanograms (ng) are sufficient to obtain a full STR profile (4); however, pertinent information can still be obtained when low copy numbers of DNA are collected from the crime scene or evidentiary material (4-9). Despite the sensitivity of polymerase chain reaction (PCR) - capillary electrophoresis (CE) based technology, low copy DNA signal can be difficult to interpret due to the preponderance of low signal-to-noise ratios. Because of the complicated nature of low-template signal, the DNA laboratory process must be optimized so that high-fidelity signal is regularly produced; studies designed to home in on optimized laboratory conditions are presented herein. The STR regions of a set of samples containing 0.0078 ng of DNA were amplified for 29 cycles; the amplified fragments were separated on two types of CE platform: an ABI 3130 Genetic Analyzer and an ABI 3500 Genetic Analyzer. The result is a genetic trace, or electropherogram (EPG), composed of three signal components: noise, artifact, and allele. The EPGs were analyzed using two peak-detection software programs. In addition, a tool termed Simulating Evidentiary Electropherograms (SEEIt) (10, 11) was used to simulate the EPG signal obtained when one copy of DNA is processed through the forensic pipeline. SEEIt was parameterized to simulate data for two laboratory scenarios: the amplification of a single copy of DNA injected on an ABI 3130 Genetic Analyzer and on an ABI 3500 Genetic Analyzer. In total, 20,000 allele peaks and 20,000 noise peaks were generated for each CE platform. Comparison of simulated and experimental data was used to elucidate features that are difficult to ascertain by experimental work alone. The data demonstrate that signal obtained with the ABI 3500 platform is, on average, a factor of four larger than signal obtained from the ABI 3130 platform. When a histogram of the signal is plotted, a multimodal distribution is observed. The first mode is hypothesized to be the result of noise, while the second, third, etc. modes are the signal obtained when one, two, etc. target DNA molecules are amplified. Evaluated in this way, full signal resolution between noise and allelic signal is visualized. Therefore, this methodology may be used to: 1) optimize post-PCR laboratory conditions to obtain excellent resolution between noise and allelic signal; and 2) determine an analytical threshold (AT) that results in few false detections and few cases of allelic dropout. A χ2 test for independence between the experimental signal in noise positions and the experimental signal in allele positions below 12 relative fluorescence units (RFU), i.e. signal in the noise regime, indicates that the populations are not independent when sufficient signal-to-noise resolution is obtained.
Once sufficient resolution is achieved, optimized ATs may be obtained by evaluating and minimizing the false-negative and false-positive detection rates. Here, a false negative is defined as the non-detection of an allele and a false positive is defined as the detection of noise. An AT of 15 RFU was found to be optimal for samples injected on the ABI 3130 for at least 10 seconds (sec): 99.42% of noise peaks did not exceed this critical value while allelic dropout was kept to a minimum of 36.97% at this AT. Similarly, for signal obtained from the ABI 3500, 99.41% and 99.0% of noise fell under an AT of 50 RFU for data analyzed with GeneMapper ID-X (GM) and OSIRIS (OS), respectively; allelic dropout was 36.34% and 36.55% for GM and OS, respectively, at this AT.
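The threshold search described above can be illustrated schematically: for each candidate AT, count the fraction of noise peaks above it (false positives) and the fraction of allele peaks below it (dropout), then pick the lowest AT meeting a false-positive constraint. The lognormal peak-height distributions below are invented placeholders, not the SEEIt or experimental data.

```python
import numpy as np

rng = np.random.default_rng(0)
noise = rng.lognormal(mean=1.5, sigma=0.5, size=20000)    # noise peaks [RFU], assumed
alleles = rng.lognormal(mean=4.0, sigma=0.9, size=20000)  # allele peaks [RFU], assumed

best = None
for at in range(1, 201):                      # candidate analytical thresholds [RFU]
    false_pos = np.mean(noise >= at)          # noise peaks detected as signal
    dropout = np.mean(alleles < at)           # allele peaks missed
    if false_pos <= 0.01:                     # e.g. require <1% noise detections
        if best is None or dropout < best[2]:
            best = (at, false_pos, dropout)

at, fp, do = best
print("AT = %d RFU: %.2f%% of noise above AT, %.2f%% allelic dropout"
      % (at, 100 * fp, 100 * do))
```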
27

Enhancement of target detection using software-defined radar (SDR)

Youssef, Ahmed 11 December 2018 (has links)
Three novel approaches based on a recent communication technique called time compression overlap-add (TC-OLA) are introduced into pulse compression (PC) radar systems to improve radar waveform shaping and enhance radar performance. The first approach lays down a framework for incorporating the TC-OLA technique into a traditional PC radar system. The resulting TC-OLA-based radar is compared with other radars, namely traditional linear frequency modulation (LFM) radar and wideband LFM radar with the same processing gain, under different background conditions. The results show the superiority of the proposed radar over the others. The second approach combines a random phase-noise signal with a selected radar signal to build a new radar system, the SSLFM radar, which enjoys the low-probability-of-intercept property and therefore has higher immunity against noise jamming techniques than other radar systems. Proper recovery of the transmitted signal, however, requires a synchronization system at the receiver side; in this dissertation, we propose three synchronization systems, each with different pros and cons. The last approach takes radar waveform design in a different direction and proposes a novel framework to combine any number of radar signals and transmit them simultaneously: instead of trying to achieve universality through waveform-shaping optimization, we do so via pluralism. As a proof of concept, all the proposed radars have been implemented and tested on a software-defined radar (SDR). The theoretical and experimental results show the superiority of all the proposed radar systems. Since TC-OLA is fundamental to this work, we add a chapter proposing a new technique, called downsample upsample shift add (DUSA), to address the limitations of the existing implementation of TC-OLA. / Graduate
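For context on the pulse-compression baseline the thesis builds on, the sketch below performs standard LFM matched filtering (not the TC-OLA processing proposed in the thesis): a weak chirp echo buried in noise is recovered by correlating the received signal with the transmitted pulse, with a processing gain of roughly the time-bandwidth product. All waveform parameters are assumptions.

```python
import numpy as np

fs = 10e6                      # sample rate [Hz], assumed
T = 100e-6                     # pulse width [s]
B = 2e6                        # chirp bandwidth [Hz]
t = np.arange(int(fs * T)) / fs
chirp = np.exp(1j * np.pi * (B / T) * t ** 2)        # unit-amplitude LFM pulse

rng = np.random.default_rng(1)
n = 4096
rx = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)
delay = 1500
rx[delay:delay + len(chirp)] += 0.2 * chirp          # weak echo buried in noise

mf = np.correlate(rx, chirp, mode="valid")           # matched filter
peak = int(np.argmax(np.abs(mf)))
print("echo located at sample %d (true delay %d)" % (peak, delay))
print("processing gain ~ 10*log10(B*T) = %.1f dB" % (10 * np.log10(B * T)))
```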
28

Transmission power control in VANETs: an approach using game theory

UCHÔA, Thiago Montenegro 28 August 2015 (has links)
VANETs (Vehicular Ad hoc Networks) have attracted much attention for their range of applications and flexibility; examples include audio and video streaming, emergency messaging, and collision alerts, among other applications. With VANETs, vehicles and the surrounding infrastructure exchange information in a coordinated way, orchestrating vehicular routes. The information exchanged in VANETs goes beyond intervehicular, V2V (Vehicle to Vehicle) communication: communication between vehicle and infrastructure is called V2I (Vehicle to Infrastructure). As the concentration of vehicles increases, intervehicular interference tends to increase, reducing SINR levels and consequently decreasing the transmission rate between devices. Transmit power control algorithms have been proposed in several areas, for example in cellular communication scenarios, but few studies address transmission power control in VANETs, and few have been simulated in high-density scenarios. This work presents simulation results analyzing the effect of transmission power control in scenarios involving 250 to 2000 vehicles. In order to reduce the impact of high transmission powers in VANETs, an algorithm, GRaPhiC, is proposed using game theory to reduce the transmission power of the devices connected to the VANET. A scenario is modeled using non-cooperative game theory, and the algorithm provides enough incentive for the nodes of the network not to deliberately increase their transmission power. To this end, a utility function is proposed that reduces the transmission power of vehicles when a set of conditions is met. To validate the results of this study, simulations were conducted in various scenarios using the Veins framework. The results show that despite a reduction in transmission power, which can reach 24% of the initial transmission power, the average transmission rate is not affected. In a scenario of electric vehicles, this reduction in average transmission power becomes essential.
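The abstract does not give GRaPhiC's actual utility function, so the sketch below uses a standard textbook form (log-throughput reward minus a price on transmit power) purely to illustrate how best-response dynamics in a non-cooperative power-control game settle at powers well below the maximum. All gains, noise levels, and prices are assumed toy values.

```python
import numpy as np

n_nodes = 6
p_max = 100e-3                                 # 100 mW power cap, assumed
gain = 1e-7 * np.ones((n_nodes, n_nodes))      # assumed cross-link gains
np.fill_diagonal(gain, 1e-5)                   # assumed own-link gains
noise = 1e-10                                  # receiver noise power [W], assumed
price = 50.0                                   # price per watt of transmit power

def sinr(p, i):
    interference = gain[i] @ p - gain[i, i] * p[i]
    return gain[i, i] * p[i] / (noise + interference)

def utility(p, i):
    return np.log(1.0 + sinr(p, i)) - price * p[i]

p = np.full(n_nodes, p_max)                    # start with everyone at maximum power
grid = np.linspace(1e-4, p_max, 400)           # candidate powers
for _ in range(50):                            # best-response dynamics
    for i in range(n_nodes):
        utils = [utility(np.concatenate([p[:i], [q], p[i + 1:]]), i) for q in grid]
        p[i] = grid[int(np.argmax(utils))]

print("equilibrium transmit powers [mW]:", np.round(p * 1e3, 1))
```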
29

Prosodic characterization of subjects from different spoken varieties of Brazilian Portuguese at different signal-to-noise ratios

Constantini, Ana Carolina, 1985- 05 August 2014 (has links)
Advisor: Plínio Almeida Barbosa / Doctoral thesis - Universidade Estadual de Campinas, Instituto de Estudos da Linguagem / Abstract: Prosody is phonic information beyond the segmental level and is usually studied by means of three classic acoustic parameters: fundamental frequency, intensity and duration. As far as dialectology is concerned, prosody has not been the first choice for investigating differences between varieties of the same language. The goal of this dissertation is to characterize and differentiate spoken varieties of Brazilian Portuguese using prosodic parameters. To that end, we analyzed recordings of spontaneous speech from 35 male subjects from seven Brazilian regions: São Paulo (SP), Minas Gerais (MG), Rio de Janeiro (RJ), Paraná (PR), Distrito Federal (DF), Northeast (NE) and North (N). The speech samples were segmented into Vowel-to-Vowel units (VV units), syllable-sized units running from the onset of one vowel to the onset of the next vowel (including any intervening consonants), using the BeatExtractor script. The ProsodicDescriptorExtractor script was then used to extract eight prosodic-acoustic measures: speech rate (VV units/s); mean, standard deviation and skewness of the smoothed z-score of VV duration; prominence rate (z-score peaks/s); median fundamental frequency; spectral emphasis; and rate of non-prominent VV units per second. The statistical analysis showed that five of the eight parameters were able to identify at least one variety and differentiate it from the others. The F0 median and spectral emphasis formed two large groups separating DF and the North region from all other varieties (except that DF and Paraná were not differentiated), DF and N being characterized by higher values of both parameters. Skewness of the smoothed z-score and the rate of non-prominent VV units/s placed DF and N in different groups. The standard deviation of the z-score pointed to differences between Northern and Southern varieties: the North region differed from SP, DF and NE, and SP in turn differed from PR. We conclude that prosodic parameters can reveal characteristics of the varieties spoken in Brazil. The analysis of the speech samples at different signal-to-noise ratios, a situation very common in forensic phonetics, showed that the F0 median and spectral emphasis are the parameters most perturbed when the signal-to-noise ratio is low, with spectral emphasis changing by as much as 154% relative to its original values; the analysis of rhythmic structure proved the most robust in the presence of noise. Finally, a perceptual (free classification) test was run with 20 Brazilian Portuguese listeners using the same spontaneous speech recordings. The variety spoken in Rio de Janeiro was the most readily recognized, reaching 90% correct identification, followed by the variety spoken in the Northeast. The closeness of the listeners' region of origin to the region of a given variety facilitated its correct identification. / Doctorate in Linguistics
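A small illustration of how a few of the listed descriptors can be computed from a sequence of VV-unit durations; the smoothing window and the prominence criterion are assumptions for the sketch, whereas the thesis obtains its values with the BeatExtractor and ProsodicDescriptorExtractor scripts.

```python
import numpy as np

def prosodic_descriptors(vv_durations, smooth_win=5):
    d = np.asarray(vv_durations, dtype=float)
    total_time = d.sum()
    speech_rate = len(d) / total_time                 # VV units per second

    z = (d - d.mean()) / d.std(ddof=1)                # duration z-scores
    kernel = np.ones(smooth_win) / smooth_win
    z_smooth = np.convolve(z, kernel, mode="same")    # smoothed z-scores

    # Local maxima of the smoothed z-score above zero count as prominences.
    peaks = ((z_smooth[1:-1] > z_smooth[:-2])
             & (z_smooth[1:-1] > z_smooth[2:])
             & (z_smooth[1:-1] > 0))
    prominence_rate = peaks.sum() / total_time        # z-score peaks per second

    return {"speech_rate_vv_per_s": speech_rate,
            "smoothed_z_sd": z_smooth.std(ddof=1),
            "prominence_rate_per_s": prominence_rate}

# Toy example: VV durations in seconds for a short stretch of speech.
print(prosodic_descriptors([0.12, 0.15, 0.22, 0.10, 0.18, 0.35, 0.14, 0.20]))
```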
30

Using an Acoustic Doppler Velocimeter (ADV) for evaluating suspended sediment concentration

Cabral, Helenesio Carlos Borges 10 October 2014 (has links)
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior / The main objective of this work was to evaluate the possibility of using an Acoustic Doppler Velocimeter (ADV) to quantify suspended sediment concentration (SSC). For this purpose, a laboratory experiment was developed in which a controlled environment was used to test the ADV response in samples with known characteristics. Eight concentrations and five particle-size ranges of five soil types from three cities were used in the tests. An experimental apparatus was built to obtain data from homogeneous water-and-soil samples using the Sontek Horizon ADV software. Measurements were taken at different positions, 6 cm, 12 cm and 18 cm from the bottom of the vessel, totaling more than 600 tests. The WinADV software was used for data visualization and post-processing, applying the PSTM filter to eliminate data affected by noise while retaining a large percentage of the data. The collected data showed an increase of SNR with increasing SSC for the different soil types across the five particle-size ranges, with the best correlations between echo and SSC occurring for the tests with soils 2, 3 and 4. Through the laboratory measurements with the ADV, it was possible to investigate the signal-to-noise relation for different particle sizes and SSCs of soil suspended in water. The SNR values found did not follow a consistent pattern with respect to soil particle size; in other words, it cannot be concluded that an increase in particle size results in an increase in SNR, since the SNR values found were larger for some particle sizes at some concentrations and smaller at others. With respect to SSC, in general, the higher the sediment concentration, the higher the SNR value found. This result agrees with what has been reported in the literature.
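A sketch of the kind of echo-versus-concentration relation examined in the thesis: fit the ADV signal-to-noise ratio against the logarithm of suspended sediment concentration for one soil and particle-size class. The numbers below are invented placeholders, not the laboratory measurements.

```python
import numpy as np

ssc = np.array([50, 100, 200, 400, 800, 1600, 3200, 6400], float)  # mg/L, assumed
snr = np.array([8.1, 10.3, 12.0, 14.2, 16.5, 18.1, 20.4, 22.0])    # dB, assumed

x = np.log10(ssc)
slope, intercept = np.polyfit(x, snr, 1)        # SNR ~ intercept + slope*log10(SSC)
r = np.corrcoef(x, snr)[0, 1]

print("SNR = %.2f + %.2f * log10(SSC),  r = %.3f" % (intercept, slope, r))
print("predicted SSC at SNR = 15 dB: %.0f mg/L"
      % 10 ** ((15.0 - intercept) / slope))
```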
