About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

RECONFIGURABLE PATCH ANTENNA FOR FREQUENCY DIVERSITY WITH HIGH FREQUENCY RATIO (1.6:1)

Jung, Chang won; Lee, Ming-jer; Liu, Sunan; Li, G. P.; De Flaviis, Franco October 2005
ITC/USA 2005 Conference Proceedings / The Forty-First Annual International Telemetering Conference and Technical Exhibition / October 24-27, 2005 / Riviera Hotel & Convention Center, Las Vegas, Nevada / A reconfigurable patch antenna integrated with RF microelectromechanical system (MEMS) switches is presented in this paper. The proposed antenna radiates circularly polarized waves at selectable dual frequencies (4.7 GHz and 7.5 GHz) with a high frequency ratio (1.6:1). The switches are incorporated into the diagonally-fed square patch to control the operating frequency, and a rectangular stub attached to the edge of the patch acts as the perturbation that produces the circular polarization. The gain of the proposed antenna is 5-6 dBi, and the axial ratio satisfies the 3 dB criterion at both operating frequencies. The switches are monolithically integrated on a quartz substrate. The antenna can be used in applications requiring frequency diversity with a remarkably high frequency ratio.
2

Design of Tunable Multi-Band Miniature Fractal Antennas on a SAW Substrate

Chi, Kuang-Ting 27 July 2011
This thesis studies the tunable frequency ratio of Sierpinski Gasket fractal antennas fabricated on a piezoelectric SAW substrate. Combining the fractal structure with the piezoelectric substrate achieves the goal of antenna miniaturization, and the proposed antenna can be widely used in wireless communication products. First, a Sierpinski Gasket fractal antenna is designed on an FR4 substrate. An asymmetric Sierpinski Gasket geometry is proposed, and suitable discontinuity locations are chosen to obtain a tri-band, tunable antenna for IEEE 802.11b/g/a wireless communication systems. A preliminary design of the Sierpinski Gasket structure on the piezoelectric substrate then allows simulated and measured results to be compared so that non-ideal processing factors can be corrected. Finally, compared with existing products, the miniaturized fractal antenna is reduced to 5 x 5 mm^2 on the SAW substrate by using a coplanar waveguide, coupled feeding, shorting with conductive adhesive, and a high iteration stage of a half-Sierpinski Gasket structure, covering the GPS band and IEEE 802.11b/g applications.
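The iteration stage mentioned above determines how finely the gasket is subdivided. As a rough, self-contained illustration (not the thesis geometry; the side length, depth, and half-gasket truncation used in the actual design are not reproduced here), the following Python sketch generates the metallised triangles of a standard Sierpinski gasket for a given iteration depth:

```python
# Minimal sketch of Sierpinski gasket generation by recursive subdivision.
# The side length, iteration depth, and "half-gasket" clipping used in the
# thesis are not specified here; these values are illustrative only.

def midpoint(p, q):
    return ((p[0] + q[0]) / 2.0, (p[1] + q[1]) / 2.0)

def sierpinski(triangle, depth):
    """Return the list of solid (metallised) triangles after `depth` iterations."""
    if depth == 0:
        return [triangle]
    a, b, c = triangle
    ab, bc, ca = midpoint(a, b), midpoint(b, c), midpoint(c, a)
    # Keep the three corner triangles; the central one is removed (the slot).
    return (sierpinski((a, ab, ca), depth - 1)
            + sierpinski((ab, b, bc), depth - 1)
            + sierpinski((ca, bc, c), depth - 1))

if __name__ == "__main__":
    side = 5.0  # mm, matching the 5 x 5 mm^2 footprint mentioned in the abstract
    base = ((0.0, 0.0), (side, 0.0), (side / 2.0, side * 3 ** 0.5 / 2.0))
    for depth in range(4):
        print(depth, len(sierpinski(base, depth)), "triangles")
```

Each iteration triples the number of metallised triangles (3^depth), which is what makes high iteration stages attractive for packing electrical length into a small footprint.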
3

La sensibilité auditive à l'harmonicité, en présence ou en l'absence de déficit cochléaire / Auditory sensitivity to harmonicity, in the presence or absence of cochlear deficit

Bonnard, Damien 19 May 2016
The auditory system fuses into one percept simultaneous pure tones whose frequencies are harmonically related, even if these pure tones are perfectly resolved by the cochlea. This harmonic fusion contributes to auditory scene analysis requiring segregation of simultaneous harmonic complex tones. Two studies reported here aimed to clarify its mechanism in normal-hearing listeners. The first study measured the discrimination of frequency ratios close to the octave (2:1) for simultaneous or consecutive pure tones. The results show that the simultaneous octave is recognized by a mechanism which is insensitive to the direction of deviations from the octave, whereas this is not the case for the sequential octave. A second study explored this difference between the simultaneous and sequential octaves by means of subjective judgments on the degree of fusion or the perceptual affinity between pure tones presented simultaneously or sequentially. The results indicate that harmonic fusion and pitch affinity are not directly related phenomena. The third study measured the detection of a change in frequency ratio for simultaneous pure tones in listeners with normal hearing or mild to moderate cochlear hearing loss. The results indicate that sensitivity to harmonicity is a robust phenomenon, sometimes altered by cochlear lesions but resistant to severe deficits in frequency discrimination. However, while inharmonicity detection is asymmetric in normal-hearing listeners, it becomes symmetric in the presence of cochlear hearing loss, suggesting that the underlying mechanism is different in the latter case.
4

Antennas on Floating Transceivers for Internet of Sea Applications

Liao, Hanguang
Extensive industrialization and human expansion have made environmental protection and wildlife conservation paramount concerns of the 21st century. The ecosystems of oceans and seas have been particularly affected by activities such as oil spills and increased fishing. This has led to a growing interest in monitoring the oceans and marine animals to detect signs of distress in aquatic species. However, transferring data from the ocean to land has been a challenging and expensive task. The concept of the Internet of Sea provides a solution for this data transfer between ocean nodes, such as animal tags or deployed floating transceivers, and the terrestrial Internet, and can potentially eliminate the need for expensive monitoring ships or underwater cables. The Internet of Sea is a system that comprises sensor nodes, in the form of detachable marine animal tags acting as data acquisition platforms, and distributed floating transceivers acting as intermediate nodes, which then transfer the data to base stations located on land. The data acquired by an animal tag are first stored on the tag; once the tag comes to the sea surface, the data are transferred to nearby floating transceivers. The floating transceivers have multi-hopping capability, so the data can be passed to the land base stations through a small number of transceivers. Due to the specific geometric shapes and size constraints of the tag and floating transceivers, as well as the harsh ocean environment, novel integrated antennas are required for this type of system. In this thesis, we propose several antenna designs suitable for Internet of Sea applications. The first design is a quasi-isotropic Antenna in Package (AiP), operating in the Bluetooth band, designed for semi-real-time monitoring. Secondly, a large frequency-ratio dual-band microstrip antenna array, working in the Extended Global System for Mobile communications (E-GSM900), Long Range (LoRa), and Bluetooth bands, has been designed for large-area wireless communication. Lastly, a circularly polarized microstrip antenna array has also been designed for the Global Positioning System (GPS). Throughout the work, the measured results are consistent with the design strategies and simulation results.
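As a purely illustrative toy model of the multi-hop relay described above (not the protocol or link budget used in the thesis; node positions and radio range are invented), the sketch below counts the minimum number of transceiver hops needed to move data from a surfaced tag to a shore base station:

```python
# Toy hop-count model of the Internet of Sea relay described in the abstract:
# a surfaced tag hands its data to nearby floating transceivers, which
# multi-hop it towards a land base station. Positions, radio range and
# topology below are made up for illustration.
from collections import deque
import math

def neighbours(i, nodes, radio_range):
    xi, yi = nodes[i]
    return [j for j, (xj, yj) in enumerate(nodes)
            if j != i and math.hypot(xi - xj, yi - yj) <= radio_range]

def min_hops(nodes, src, dst, radio_range):
    """Breadth-first search for the smallest number of hops from src to dst."""
    seen, queue = {src}, deque([(src, 0)])
    while queue:
        node, hops = queue.popleft()
        if node == dst:
            return hops
        for nxt in neighbours(node, nodes, radio_range):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, hops + 1))
    return None  # no relay path exists

# node 0 = tag, nodes 1..3 = floating transceivers, node 4 = shore base station
nodes = [(0, 0), (3, 1), (6, 0), (9, 1), (12, 0)]  # km, hypothetical
print(min_hops(nodes, src=0, dst=4, radio_range=4.0))  # -> 4 hops
```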
5

Echo Delay Estimation to Aid Source Localization in Noisy Environments

Bettadapura, Raghuprasad Shivatejas 17 September 2014
Time-delay estimation (TDE) finds application in a variety of problems, be it locating fractures or steering cameras towards the speaker in a multi-participant conference application. Underwater acoustic OFDM source localization is another important application of TDE. Existing underwater acoustic source localization techniques use a microphone array consisting of three or four sensors in order to locate the source effectively. The analog-to-digital converters (ADCs) at these sensors call for a non-trivial investment in circuitry and memory. A relatively inexpensive source localization algorithm is therefore needed that works with the output of a single sensor. Since an inexpensive process for estimating the location of the source is desired, the ADC used at the sensor is capable only of a relatively low sampling rate. For a given delay, a low sampling rate leads to sub-sample-interval delays, which the desired algorithm must be able to estimate. Prevailing TDE algorithms make a priori assumptions about the nature of the received signal, such as Gaussianity, wide-sense stationarity, or periodicity. The desired algorithm must not be restrictive as far as the nature of the transmitted signal is concerned. A time-delay estimation algorithm based on the time-frequency ratio of mixtures (TFRM) method is proposed. The experimental setup consists of two microphones/sensors placed at different distances from the source. The method accepts as input the received signal, which consists of the sum of the signal received at the nearer sensor, the signal received at the farther sensor, and noise. The TFRM algorithm works in the time-frequency domain and seeks to perform successive source cancellation in the received burst. The key to performing source cancellation is to estimate the ratio in which the sources combine; this ratio is estimated by taking a windowed mean of the ratio of the spectrograms of any two pulses in the received burst. The variance of the mean function helps identify single-source regions and regions in which the sources mix. The performance of the TFRM algorithm is evaluated in the presence of noise and compared against the Cramer-Rao lower bound. It is found that the variance of the estimates returned by the estimator diverges from the predictions of the Cramer-Rao inequality as the farther sensor is moved farther away. Conversely, the estimator becomes more reliable as the farther sensor is moved closer. The time-delay estimates obtained from the TFRM algorithm are used for source localization. The problem of finding the source reduces to finding the locus of points whose difference in distance to the two sensors corresponds to the estimated time delay. By moving the pair of sensors to a different location, or by having a second time-delay sensor, an exact location for the source can be determined by finding the point of intersection of the two loci. The TFRM method does not rely on a priori information about the signal. It is applicable to OFDM sources as well as sinusoidal and chirp sources. / Master of Science
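The ratio-estimation step is described concretely enough to sketch: the mixing ratio is taken as a windowed mean of the ratio of two pulse spectrograms, and low variance of that ratio marks single-source regions. A minimal Python sketch under those assumptions follows; the test signal, window length, energy floor, and variance threshold are invented, and this is not the thesis implementation:

```python
# Illustrative sketch of the ratio-estimation idea behind TFRM: per time frame,
# take the ratio of the two pulses' spectrograms over energetic bins, keep
# frames where that ratio has low variance (single dominant source), and
# average their means to estimate the mixing ratio.
import numpy as np
from scipy.signal import spectrogram

def tfrm_ratio(pulse_a, pulse_b, fs, nperseg=128, rel_floor=1e-3, var_thresh=0.1):
    """Windowed-mean estimate of the ratio in which two pulses combine."""
    _, _, Sa = spectrogram(pulse_a, fs=fs, nperseg=nperseg)
    _, _, Sb = spectrogram(pulse_b, fs=fs, nperseg=nperseg)
    frame_means = []
    for k in range(Sa.shape[1]):                        # one time frame at a time
        mask = Sb[:, k] > rel_floor * Sb[:, k].max()    # keep only energetic bins
        r = Sa[mask, k] / Sb[mask, k]                   # time-frequency ratio
        if r.var() < var_thresh:                        # low variance -> single source
            frame_means.append(r.mean())
    return float(np.mean(frame_means)) if frame_means else None

if __name__ == "__main__":
    fs = 8000.0
    t = np.arange(0, 0.1, 1 / fs)
    pulse = np.sin(2 * np.pi * 440 * t)
    # The second pulse is a half-amplitude copy, so the power ratio is 1/0.25 = 4.
    print(tfrm_ratio(pulse, 0.5 * pulse, fs))           # ~4.0
```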
6

Horloge à réseau optique de mercure : spectroscopie haute-résolution et comparaison d'étalons de fréquence ultra-précis / Mercury optical lattice clock : from high-resolution spectroscopy to frequency ratio measurements

Favier, Maxime 11 October 2017
This thesis presents the development of a high-accuracy optical frequency standard based on neutral mercury 199Hg atoms trapped in an optical lattice. I will present the experimental setup and the improvements made during this thesis, which have allowed us to perform spectroscopy of the doubly forbidden 1S0 - 3P0 mercury clock transition with Hz-level resolution. With such a resolution, we have been able to conduct an in-depth study of the physical effects affecting the clock transition. This study improved the knowledge of the clock transition frequency by a factor of 60, pushing the uncertainty below that of the current realization of the SI second by the best cesium atomic fountains. Finally, I will present the results of several comparison campaigns between the mercury clock and other state-of-the-art frequency standards, both in the optical and in the microwave domain.
7

Application of GIS-Based Knowledge-Driven and Data-Driven Methods for Debris-Slide Susceptibility Mapping

Das, Raja; Nandi, Arpita; Joyner, Andrew; Luffman, Ingrid 01 January 2021
Debris-slides are fast-moving landslides that occur in the Appalachian region, including the Great Smoky Mountains National Park (GRSM). Various knowledge-driven and data-driven approaches using the spatial distribution of past slides and associated factors can be used to estimate the region's debris-slide susceptibility. This study developed two debris-slide susceptibility models for GRSM using knowledge-driven and data-driven methods in GIS. Six debris-slide causing factors (slope curvature, elevation, soil texture, land cover, annual rainfall, and bedrock discontinuity) and 256 known debris-slide locations were used in the analysis. A knowledge-driven weighted overlay analysis and a data-driven bivariate frequency ratio analysis were performed. Both models are helpful; however, each comes with its own advantages and disadvantages regarding degree of complexity, time demands, and the experience required of the analyst. The susceptibility maps are useful to planners, developers, and engineers for maintaining the park's infrastructure and for delineating zones for further detailed geotechnical investigation.
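The bivariate frequency ratio computation named above is compact enough to sketch. Assuming the usual formulation (the frequency ratio of a factor class is the share of slide cells in that class divided by the share of all cells in that class, and the susceptibility index of a cell is the sum of the ratios of its classes), a toy Python example follows; the miniature rasters and class labels are invented, not the GRSM data:

```python
# Minimal sketch of the bivariate frequency ratio (FR) method as commonly
# formulated: FR(class) = (% of slide cells in class) / (% of all cells in class),
# and the susceptibility index of a cell is the sum of the FRs of its classes.
# The tiny "rasters" below are invented; the real analysis used six factors
# (curvature, elevation, soil texture, land cover, rainfall, discontinuity).
import numpy as np

def frequency_ratios(factor, slides):
    """Map each class of one factor raster to its frequency ratio."""
    ratios = {}
    total_cells = factor.size
    total_slides = slides.sum()
    for cls in np.unique(factor):
        in_cls = factor == cls
        pct_slides = slides[in_cls].sum() / total_slides
        pct_cells = in_cls.sum() / total_cells
        ratios[cls] = pct_slides / pct_cells
    return ratios

def susceptibility(factors, slides):
    """Sum class frequency ratios over all factor rasters, cell by cell."""
    index = np.zeros(slides.shape, dtype=float)
    for factor in factors:
        fr = frequency_ratios(factor, slides)
        index += np.vectorize(fr.get)(factor)
    return index

slides = np.array([[1, 0], [0, 1]])             # 1 = known debris-slide cell
slope_cls = np.array([[2, 1], [1, 2]])          # e.g. 1 = gentle, 2 = steep
cover_cls = np.array([[1, 1], [2, 2]])          # e.g. 1 = forest, 2 = barren
print(susceptibility([slope_cls, cover_cls], slides))
```

Classes in which past slides are over-represented get a ratio above 1, so cells combining several such classes accumulate the highest index values.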
8

Landslide Susceptibility Analysis Using Open Geo-spatial Data and Frequency Ratio Technique / Jordskredkänslighetsanalys med hjälp av öppen geo-spatial data och frekvenskvotsteknik

YORULMAZ, TARIK EMRE January 2022 (has links)
Landslide susceptibility maps are useful for spatial decision-making to minimize the loss of lives and properties. There are many studies related to the development of landslide susceptibility maps using various methods such as the Analytic Hierarchy Process, Weight of Evidence, and Logistic Regression. Commonly, the geospatial data required for such analysis (such as land cover and soil type maps) are only locally available and pertinent to small case studies. Transferable and scalable approaches utilizing publicly available, large-scale datasets (i.e., global or continental) are necessary to develop susceptibility maps in areas where local data are not available or when large-scale analysis is required. To develop such approaches, a systematic comparison between locally available, fine-resolution datasets and larger-scale, openly available but coarser-resolution datasets is essential. The objective of this study is to investigate the efficiency of globally available public data for landslide susceptibility mapping by comparing it with the performance of data provided by local institutions. For this purpose, the Göta river valley in Sweden and the country of Rwanda were selected as study areas. The Göta river valley was used for the comparison of local and open data, while Rwanda was used as a study area to verify the efficiency of the open-data analysis and the transferability of the framework. The selected landslide impact factors for this study are: elevation, slope, soil type, land cover, precipitation, lithology, distance to roads, and distance to the drainage network. Landslide susceptibility maps were prepared using the state-of-the-art Frequency Ratio method. The validation results using the prediction rate curve technique show area under curve values of 92.9%, 90.2%, and 83.1% for the local and open data analyses of the Göta river valley and the open data analysis of Rwanda, respectively. The results show that globally available open data demonstrate strong potential for landslide susceptibility mapping when high-resolution local data are not available.
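The 92.9%, 90.2%, and 83.1% figures are areas under prediction rate curves. Assuming the usual construction (rank cells from most to least susceptible and plot the cumulative share of validation landslides captured against the cumulative share of area considered), a short sketch of that computation follows; the susceptibility scores and validation cells are toy values, not the Göta river valley or Rwanda data:

```python
# Rough sketch of a prediction rate curve and its area under the curve (AUC):
# cells are ranked from most to least susceptible, and the cumulative fraction
# of validation landslides captured is plotted against the cumulative fraction
# of study area considered. The toy arrays below are invented.
import numpy as np

def prediction_rate_auc(susceptibility, validation_slides):
    order = np.argsort(susceptibility.ravel())[::-1]          # most susceptible first
    hits = validation_slides.ravel()[order]
    y = np.concatenate(([0.0], np.cumsum(hits) / hits.sum()))  # slides captured
    x = np.concatenate(([0.0], np.arange(1, hits.size + 1) / hits.size))  # area used
    return float(np.sum(np.diff(x) * (y[1:] + y[:-1]) / 2))    # trapezoidal AUC

susc = np.array([0.9, 0.7, 0.6, 0.4, 0.3, 0.1])               # model output per cell
val = np.array([1, 1, 0, 0, 1, 0])                             # held-out slide cells
print(f"AUC = {prediction_rate_auc(susc, val):.2f}")
```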
9

Wildfire Hazard Mapping using GIS-MCDA and Frequency Ratio Models: A Case Study in Eight Counties of Norway

Zeleke, Walelegn Mengist January 2019 (has links)
A wildfire is an uncontrollable fire in an area of combustible fuel that occurs in wild or countryside areas. Wildfires are becoming a deadly and frequent event in Europe due to extreme weather conditions. In 2018, wildfires profoundly affected Sweden, Finland, and Norway, where they had not previously been big news. In Norway, although there are well-organized fire detection, warning, and mitigation systems, mapping wildfire risk areas with georeferenced spatial information before fires occur is not yet well practiced. Freely available remotely sensed spatial data now make it possible to analyse wildfire hazard areas in advance with geographical information systems combined with multicriteria decision analysis (GIS-MCDA) and frequency ratio models, so that subsequent wildfire warning, mitigation, organizational, and post-event resilience activities and preparations can be better planned.  This project covers eight counties of Norway: Oslo, Akershus, Østfold, Vestfold, Telemark, Buskerud, Oppland, and Hedmark. These are the counties with the highest wildfire frequency over the last ten years in Norway. In this study, GIS-MCDA integrated with the analytic hierarchy process (AHP) and frequency ratio (FR) models were used with sixteen selected factor criteria, chosen for their relative importance to wildfire ignition, fuel load, and other related characteristics. The produced factor maps were grouped under four main clusters (K): land use (K1), climate (K2), socioeconomic (K3), and topography (K4) for further analysis. The final map was classified into no-hazard, low, medium, and high hazard levels. The comparison showed that the frequency ratio model with MODIS satellite data had a prediction rate of 72% efficiency, followed by the same model with VIIRS data at 70% efficiency. The GIS-MCDA model showed 67% efficiency with both MODIS and VIIRS data. These results were interpreted according to Yesilnacar's classification: the frequency ratio model with MODIS data is considered a good predictor, whereas the GIS-MCDA model is an average predictor. When testing the models on the dependent data set, the frequency ratio model showed 72% performance with both MODIS and VIIRS data, and the GIS-MCDA model showed 67% and 68% performance with MODIS and VIIRS data, respectively. In the hazard maps produced, the frequency ratio models for both MODIS and VIIRS showed that Hedmark and Akershus counties had the largest areas with the highest susceptibility to wildfires, while the GIS-MCDA method pointed to Østfold and Vestfold counties. Through this study, the most important independent wildfire predictor criteria were ranked from highest to lowest importance; wildfire constraint and criteria maps were produced; wildfire hazard maps with high-resolution georeferenced data were produced and compared using the three models; and the most reliable, robust, and applicable model alternative was selected and recommended. The aims and specific objectives of this study are therefore considered fulfilled.
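Two of the building blocks named above, AHP weighting and weighted overlay, are standard enough to sketch. Assuming the usual principal-eigenvector formulation of AHP, the toy example below derives weights from a hypothetical 3 x 3 pairwise comparison matrix and overlays three reclassified factor rasters; it is not the sixteen-criteria analysis performed in the study:

```python
# Sketch of the two GIS-MCDA building blocks named in the abstract: AHP weights
# from a pairwise comparison matrix (principal eigenvector) and a weighted
# overlay of reclassified factor rasters. The 3x3 matrix and rasters are toy
# values, not the sixteen criteria used in the study.
import numpy as np

def ahp_weights(pairwise):
    """Principal-eigenvector weights of a reciprocal pairwise comparison matrix."""
    eigvals, eigvecs = np.linalg.eig(pairwise)
    principal = eigvecs[:, np.argmax(eigvals.real)].real
    return principal / principal.sum()

def weighted_overlay(rasters, weights):
    """Weighted sum of reclassified factor rasters (same shape, common scale)."""
    return sum(w * r for w, r in zip(weights, rasters))

# Hypothetical judgments: fuel load vs. climate vs. slope (Saaty 1-9 scale).
pairwise = np.array([[1.0, 3.0, 5.0],
                     [1/3, 1.0, 2.0],
                     [1/5, 1/2, 1.0]])
w = ahp_weights(pairwise)
fuel  = np.array([[4, 1], [3, 2]], dtype=float)   # reclassified hazard scores 1-4
clim  = np.array([[2, 2], [4, 1]], dtype=float)
slope = np.array([[1, 3], [2, 4]], dtype=float)
print(np.round(w, 3))                              # roughly [0.65, 0.23, 0.12]
print(weighted_overlay([fuel, clim, slope], w))
```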
10

Evaluating the use of clock frequency ratio estimators in the playout from video distribution networks / Utvärdering av klockfrekvensratiosuppskattare i videoutspelning från ett distributionsnätverk

Myresten, Emil January 2023 (has links)
As traditional TV broadcasters utilize the Internet to transport video streams, they often employ third-party distribution networks to ensure that the Quality of Service of the packet stream remains high. In the last step of such a distribution network, a playout scheduler will schedule the packets so that their intervals are as close as possible to the intervals with which they were initially sent by the source. This is done with the aim of minimizing the packet delay variation experienced by the final destination. Because the source and the distribution network are not always synchronized to the same reference clock, reconstructing the packet intervals back to their initial values is subject to clock skew: the clocks run at different frequencies. In the presence of clock skew, each packet interval is reconstructed with a slight error, which accumulates throughout the packet stream. This thesis evaluates how clock frequency ratio estimators can be implemented as part of the playout scheduler, allowing it to better reconstruct the packet intervals in the face of clock skew. Two clock frequency ratio estimators presented in the literature are implemented as part of playout schedulers, and their use in the context of a video distribution network is evaluated and compared to other playout schedulers. In all, four of the considered playout schedulers employ clock frequency ratio estimation, and four do not. The playout schedulers are tested on a test bed consisting of two unsynchronized computers, physically separated into a source and a destination connected via Ethernet, to ensure the presence of clock skew. The source generates a video stream, which is sent to the destination. The destination is responsible for packet interval reconstruction and data collection, allowing the eight playout schedulers to be compared. Each playout scheduler is evaluated under three network scenarios, each adding an increasing amount of packet delay variation to the packet stream. The results show that the Cumulative Ratio Scaling with Warm-up scheduler, which employs a clock frequency ratio estimator based on accumulating inter-packet times, performs well under all three network scenarios. The behaviour of the playout scheduler is predictable, and the frequency ratio estimate appears to converge towards the true clock frequency ratio as more packets arrive at the playout scheduler. While this playout scheduler is not perfect, its behaviour shows promise for further extension.
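The abstract pins down the key idea of the best-performing scheduler: estimate the clock frequency ratio from accumulated inter-packet times and rescale the source intervals by that estimate before playout. A rough sketch under that reading follows; the timestamps are invented, warm-up and loss handling are omitted, and this is not the thesis's Cumulative Ratio Scaling with Warm-up implementation:

```python
# Rough sketch of a cumulative clock-frequency-ratio estimator of the kind the
# abstract describes: accumulate inter-packet times on both clocks and take
# their quotient, then rescale the source intervals by the estimate when
# scheduling playout. Timestamps below are invented.
def cumulative_ratio(send_ts, recv_ts):
    """Estimate receiver/sender clock frequency ratio from accumulated spans."""
    if len(send_ts) < 2:
        return 1.0                           # not enough packets yet
    send_span = send_ts[-1] - send_ts[0]     # accumulated sender-side inter-packet time
    recv_span = recv_ts[-1] - recv_ts[0]     # accumulated receiver-side time
    return recv_span / send_span

def playout_times(send_ts, recv_ts, start):
    """Schedule packets by rescaling source intervals with the current estimate."""
    out = [start]
    for i in range(1, len(send_ts)):
        ratio = cumulative_ratio(send_ts[: i + 1], recv_ts[: i + 1])
        out.append(out[-1] + (send_ts[i] - send_ts[i - 1]) * ratio)
    return out

# The receiver clock runs about 1% fast relative to the sender in this toy
# example, and network jitter perturbs the arrival times; the estimate settles
# towards the true ratio as more packets accumulate.
send = [0.00, 0.02, 0.04, 0.06, 0.08]
recv = [0.500, 0.5215, 0.5405, 0.5608, 0.5808]
print([round(t, 4) for t in playout_times(send, recv, start=0.60)])
```

Because the spans in numerator and denominator keep growing, individual jittered arrivals matter less and less, which is consistent with the convergence behaviour reported above.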
