261

Performance analysis of suboptimal soft decision DS/BPSK receivers in pulsed noise and CW jamming utilizing jammer state information

Juntti, J. (Juhani) 17 June 2004 (has links)
Abstract The problem of receiving direct sequence (DS) spread spectrum, binary phase shift keyed (BPSK) information in pulsed noise and continuous wave (CW) jamming is studied in additive white noise; automatic gain control is not modelled. The general system theory of receiver analysis is first presented and previous literature is reviewed. The study treats the problem of decision making after matched-filter or integrate-and-dump demodulation, since the decision method has a great effect on system performance under pulsed jamming. The following receivers are compared: hard decision, soft decision, quantized soft decision, signal-level-based erasure, and chip combiner receivers. The analysis is carried out using a channel parameter D and a bit error upper bound, based on a Chernoff upper bound and a union bound. Simulations in the original papers used a convolutionally coded DS/BPSK system and confirm that the analytical results are valid; the final conclusions are based on the analytical results. The analysis is presented for pulsed noise and CW jamming, and the same kinds of methods can also be used to analyse other jamming signals. The receivers are compared under pulsed noise and CW jamming together with white Gaussian noise. The results show that noise jamming is more harmful than CW jamming and that a jammer should use a high pulse duty factor; if the jammer cannot optimise the pulse duty factor, a good robust choice is continuous-time jamming. The best performance was achieved by the chip combiner receiver, with the quantized soft decision and signal-level-based erasure receivers only slightly worse. The hard decision receiver was clearly worse, and the soft decision receiver without jammer state information was shown to be the most vulnerable to pulsed jamming. The chip combiner receiver is 3 dB worse than an optimum receiver (the soft decision receiver with perfect channel state information). If a simple implementation is required, the hard decision receiver should be used; if a moderately complex implementation is allowed, the quantized soft decision receiver should be used. The signal-level-based erasure receiver does not give any remarkable improvement and is more complex to implement, so it is not worth using. If receiver complexity is not a limiting factor, the chip combiner receiver should be used. Uncoded DS/BPSK systems are vulnerable to jamming, and channel coding is an essential part of an antijam communication system. Detecting the jamming and erasing jammed symbols in the channel decoder can remove the effect of pulsed jamming, and erasure receivers are rather easy to realize with current integrated circuit technology.
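The bit-error bound used in this kind of analysis can be sketched numerically. The snippet below is a minimal illustration assuming an idealized soft-decision metric with perfect jammer state information, a pulsed noise jammer whose average power is fixed while its duty factor varies, and the first few distance-spectrum terms of the common rate-1/2, K = 7 (133,171) convolutional code; neither the metric nor the code is necessarily the one used in the original papers.

```python
import numpy as np

def chernoff_D(eb_n0_db, eb_nj_db, rho, rate=0.5):
    """Per-coded-symbol Chernoff parameter D, averaged over the two jammer
    states (illustrative approximation, not the thesis's exact derivation)."""
    es_n0 = rate * 10 ** (eb_n0_db / 10)      # Es/N0, thermal noise only
    es_nj = rate * 10 ** (eb_nj_db / 10)      # Es/NJ, NJ = average jammer PSD
    n0, nj = 1.0 / es_n0, 1.0 / es_nj         # normalise Es = 1
    d_off = np.exp(-1.0 / n0)                 # symbol not hit by the jammer pulse
    d_on = np.exp(-1.0 / (n0 + nj / rho))     # hit: jammer power packed into duty factor rho
    return (1.0 - rho) * d_off + rho * d_on

def union_bound_ber(D, spectrum={10: 36, 12: 211, 14: 1404, 16: 11633}):
    """P_b <= sum_d B_d * D**d, truncated to the first four even distances of
    the rate-1/2, K = 7 (133,171) code."""
    return sum(b * D ** d for d, b in spectrum.items())

for rho in (0.05, 0.2, 1.0):                  # rho = 1.0 is continuous jamming
    D = chernoff_D(eb_n0_db=10.0, eb_nj_db=5.0, rho=rho)
    print(f"rho={rho:4.2f}  D={D:.3e}  BER bound <= {union_bound_ber(D):.3e}")
```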
262

Enhancing the performance of ad hoc networking by lower layer design

Prokkola, J. (Jarmo) 25 November 2008 (has links)
Abstract The research of early ad hoc-like networks, namely multi-hop packet radio networks, concentrated mainly on the lower layers (below the network layer). Interestingly, research on modern ad hoc networks has been restricted mainly to routing protocols. This is understandable, since routing is very challenging in such dynamic networks, but the drawback is that the lower layer models used in these studies are often highly simplified, giving inaccurate or even incorrect results. In addition, modern ad hoc network solutions are usually suboptimal because the lower layers they use were not designed specifically for ad hoc networking. Thus, ad hoc networking performance can in general be notably enhanced by also considering the design of the lower layers. Simple deployment and robustness make wireless ad hoc networks attractive for several applications (e.g., military, public authority, peer-to-peer civilian, and sensor networking), but the performance of current solutions is typically not adequate. The focus of this work is on the effects of lower layer functionalities on the performance of ad hoc networks, while also taking into account the effects of the upper layers (e.g., application traffic models). A CDMA (Code Division Multiple Access) based dual-channel flat ad hoc network solution, incorporating cross-layering between the three lowest layers, is proposed and analyzed. Its main element is the Bi-Code Channel Access (BCCA) method, in which a common code channel is used for broadcast information (e.g., route discovery), while a receiver-specific code channel is used for all directed transmissions. In addition, a new MAC (Medium Access Control) solution designed for BCCA is presented, as is a novel network layer spreading code distribution (NSCD) method. The advantage of these methods is that they are designed specifically for use in ad hoc networks; a toy illustration of the dual code channel idea follows this abstract. With an extensive set of case studies, it is shown that the presented methods outperform the typically used ad hoc network solutions (based on IEEE 802.11) across different kinds of scenarios, environments, modeling concepts, and parameters. Detailed simulations are carried out to analyze the effects of different lower layer features, revealing interesting phenomena and dependencies between layers. It is also shown that close attention should be paid to lower layer modeling even when the overall network performance is the focus. In addition, various interesting features and behaviors of ad hoc networking are brought up. / Tiivistelmä: The first studies of ad hoc networks appeared under the name multi-hop packet radio networks and concerned mainly the communication layers below the network layer, whereas current research has concentrated mainly on routing protocols. This is understandable insofar as routing is very challenging in such dynamic networks, but the problem is that the lower layer models used are often highly simplified, which can lead to inaccurate or even incorrect results. In addition, currently proposed ad hoc network solutions are often inefficient, because the lower layer solutions they use were not intended for such networks. Thus, the performance of ad hoc networks can be improved considerably by paying attention to the design of the lower layers.
The infrastructure-free nature of these networks is attractive for several applications (for example military environments, public authority use, direct user-to-user connections, and sensor networks), but the performance is often not sufficient for practical applications. This work mainly studies the effect of lower layer functionality on the performance of ad hoc networks, also taking into account the upper layers, such as application-level models. A dual-channel flat ad hoc network solution based on code division multiple access (CDMA) is presented and analyzed, exploiting interaction between the three lowest layers. The core of the solution is the Bi-Code Channel Access (BCCA) method, which uses two channels for data transfer: one serves as a control channel common to all nodes (route formation, for example, can use this channel), while the other is a user-specific channel used for transmissions directed to that user (the actual data and so on). In addition, a medium access control method designed for BCCA and a network layer method for distributing spreading code information are presented. The advantage of these new methods is that they are designed specifically for ad hoc networks. With an extensive set of test cases it is shown that the proposed solution outperforms the typical ad hoc network solutions based on the IEEE 802.11 standard; the tests use different kinds of network topologies, environments, modeling approaches, and parameters. Detailed simulations of the various test cases examine how the different lower layer methods affect performance in each case. Care is needed in lower layer modeling, since the work shows that modeling errors can have a large effect also on upper layer performance. The work also reveals several interesting phenomena and interactions related to the studied methods and to the behavior of ad hoc networks in general.
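As a toy illustration of the dual code channel idea (not the actual BCCA waveform, code construction, or MAC of the thesis), the sketch below spreads a broadcast message with a common code and a directed message with a receiver-specific code, then recovers each channel by correlating the superposed signal with the matching code. The random binary codes and the spreading factor are assumptions made only for this example.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 63                                              # chips per bit (illustrative spreading factor)

common_code = rng.choice([-1, 1], N)                # shared broadcast-channel code
node_codes = {nid: rng.choice([-1, 1], N) for nid in ("A", "B", "C")}

def spread(bits, code):
    """Direct-sequence spreading: each data bit multiplies the whole code."""
    return np.concatenate([b * code for b in bits])

def correlate(chips, code):
    """Per-bit normalised correlation: near +/-1 for the matching code,
    near 0 for any other (quasi-orthogonal) code."""
    return chips.reshape(-1, len(code)) @ code / len(code)

bcast = np.array([1, -1, 1, 1, -1])                 # e.g. route discovery traffic
to_B = np.array([-1, -1, 1, -1, 1])                 # data directed to node B
rx = spread(bcast, common_code) + spread(to_B, node_codes["B"])   # superposed on air

print("broadcast channel :", np.round(correlate(rx, common_code), 2))
print("node B's channel  :", np.round(correlate(rx, node_codes["B"]), 2))
print("node C's channel  :", np.round(correlate(rx, node_codes["C"]), 2))  # ~0: nothing for C
```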
263

DSSS Communication Link Employing Complex Spreading Sequences

Marx, Frans Engelbertius 24 January 2006 (has links)
The present explosion in digital communications and multi-user wireless cellular networks has created a demand for more effective modulation methods that utilize the available frequency spectrum more efficiently. To accommodate a large number of users sharing the same available frequency band, one requirement is the availability of large families of spreading sequences with excellent autocorrelation (AC) and cross-correlation (CC) properties. Another is the availability of sets of orthogonal basis functions to extend capacity by exploiting all available degrees of freedom (e.g., temporal, frequency and spatial dimensions), or by employing orthogonal multi-code operation in parallel, as used in the latest 3GPP and 3GPP2 Wideband Code Division Multiple Access (WCDMA) modulation standards, which employ sets of orthogonal Walsh codes to improve overall data throughput. The generic Direct Sequence Spread Spectrum (DSSS) transmitter developed in this dissertation was originally designed and implemented to investigate the practicality and usefulness of complex spreading sequences, and secondly to verify the concept of non-linearly interpolated root-of-unity (NLI-RU) filtering. It was found that both concepts have large potential for application in point-to-point links, and particularly in micro-cellular Wireless Local Area Network (WLAN) and Wireless Local Loop (WLL) environments. Since then, several novel concepts and subsystems have been added to the original system, some of which have been patented both locally and abroad; these are outlined below. The ultimate goal of this research project was to apply the principles of the generic DSSS transmitter and receiver developed in this study in the implementation of a WLL radio-frequency (RF) link, and particularly towards the establishment of affordable wireless multimedia services in rural areas. The extended coverage at exceptionally low power emission levels offered by the new design will be particularly useful in rural applications. The proposed WLL concept can, for example, also be utilized to add a unique mobility feature to existing Private Automatic Branch Exchanges (PABXs). In addition, the proposed system offers superior teletraffic capacity compared to existing micro-cellular technologies such as the Digital European Cordless Telephony (DECT) system, which has been considered by Telkom for deployment in rural areas. The latter is a rather outdated interim standard offering much lower spectral efficiency and capacity than competitive CDMA solutions, such as the concept analyzed in this dissertation, which is based on the use of unique large families of spectrally well confined (i.e., band-limited) constant envelope (CE) complex spreading sequences (CSS) with superior correlation properties. The CE characteristic of the new spreading sequences furthermore facilitates the design of systems with superior power efficiency and exceptionally robust performance (much less spectral re-growth) compared to existing 2G and 3G modulation standards in the presence of non-linear power amplification. This feature allows for a system with larger coverage for a given performance level and limited peak power, or alternatively, longer battery life for a given maximum communication distance and performance level, within a specified fixed spreading bandwidth.
In addition, the possibility to extend the concept to orthogonal multi-code operation provides capacity comparable to present 3G modulation standards, while still preserving superior power efficiency under non-linear power amplification. Conventional spread spectrum communication systems employ binary spreading sequences, such as Gold or Kasami sequences, and the practical implementation of such a system is relatively simple. The design and implementation of a spread-spectrum communication system employing complex spreading sequences is, however, considerably more complex and had not previously been presented or implemented in hardware. The design of appropriate code lock loops for CSS led to a unique design with a 3 dB performance advantage compared to similar loops designed for binary spreading sequences. The theoretical analysis and simulation of such a system are presented, with the primary focus on an efficient hardware implementation of all the new concepts proposed, in the form of a WLL RF-link demonstrator. / Dissertation (MEng (Electronic Engineering))--University of Pretoria, 2007. / Electrical, Electronic and Computer Engineering / unrestricted
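For a flavour of the sequence properties discussed above, the snippet below uses Zadoff-Chu sequences, a well-known family of constant-envelope complex sequences with ideal periodic autocorrelation and flat cross-correlation. They are used here purely to illustrate what "CE complex spreading sequences with superior correlation properties" means, and are not the specific sequence family developed in the dissertation.

```python
import numpy as np

def zadoff_chu(root, length):
    """Constant-envelope complex sequence; root must be coprime with length."""
    n = np.arange(length)
    return np.exp(-1j * np.pi * root * n * (n + 1) / length)

def periodic_corr(a, b):
    """Normalised periodic (cyclic) correlation computed via the FFT."""
    return np.fft.ifft(np.fft.fft(a) * np.conj(np.fft.fft(b))) / len(a)

s1, s2 = zadoff_chu(5, 63), zadoff_chu(4, 63)
ac = np.abs(periodic_corr(s1, s1))                   # autocorrelation of s1
cc = np.abs(periodic_corr(s1, s2))                   # cross-correlation of s1 and s2
print("constant envelope :", np.allclose(np.abs(s1), 1.0))
print("AC peak / max sidelobe :", ac[0].round(3), "/", ac[1:].max().round(6))
print("CC max (about 1/sqrt(63)) :", cc.max().round(3))
```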
264

Propagation analysis of a 900 MHz spread spectrum centralized traffic signal control system.

Urban, Brian L. 05 1900 (has links)
The objective of this research is to investigate different propagation models to determine whether they accurately predict received signal levels for short-path 900 MHz spread spectrum radio systems. The City of Denton, Texas, provided the data and physical facilities used in the course of this study. The literature review indicates that propagation models have not been studied specifically for short-path spread spectrum radio systems, so this work should provide guidelines and a useful example for planning and implementing such radio systems. The propagation analysis considers the intervening terrain, the path length, and the fixed system gains and losses.
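A minimal sketch of the kind of prediction involved is shown below: a log-distance path-loss model referenced to free space, combined with fixed system gains and losses, evaluated at a 900 MHz band frequency. All numerical values (transmit power, gains, losses, path-loss exponent, distances) are hypothetical and are not taken from the Denton measurements.

```python
import math

def fspl_db(d_m, f_hz):
    """Free-space path loss in dB (Friis): 20*log10(4*pi*d*f/c)."""
    return 20 * math.log10(4 * math.pi * d_m * f_hz / 3.0e8)

def received_level_dbm(tx_dbm, gains_db, losses_db, d_m, f_hz, n=2.0, d0_m=100.0):
    """Log-distance model referenced to free space at d0; n = 2 reduces to
    pure free space, larger n crudely accounts for terrain and clutter."""
    path_loss = fspl_db(d0_m, f_hz) + 10 * n * math.log10(d_m / d0_m)
    return tx_dbm + sum(gains_db) - sum(losses_db) - path_loss

# Hypothetical short-path 900 MHz link budget (antenna gains, cable losses):
rssi = received_level_dbm(tx_dbm=30.0, gains_db=[6.0, 6.0], losses_db=[2.0, 2.0],
                          d_m=800.0, f_hz=902.0e6, n=2.8)
print(f"predicted received signal level: {rssi:.1f} dBm")
```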
265

Comparative Bearing Capacity Analysis of Spread Footing Foundation on Fractured Granites

Nandi, Arpita 01 August 2011 (has links)
It is evident from several studies that ultimate bearing capacities calculated by traditional methods are conservative and subjective. For large civil structures founded on spread footings, more cost-effective and safer foundations could be achieved by adopting optimum ultimate bearing capacity values based on an objective and pragmatic analysis. There is a pressing need to modify the existing methods for accurate estimation of the bearing capacities of rocks beneath spread footings. In practice, foundation bearing capacities of rock masses are often estimated using presumptive values from the Building Officials and Code Administrators (BOCA) National Building Code and methods adopted by the American Association of State Highway and Transportation Officials (AASHTO). However, the estimated values are often not realistic, and site-specific analyses are essential. In this study, geotechnical reports and drill-log data from successful geotechnical design projects founded on a wide range of granites in eastern Tennessee were consulted. Several published methods were used to calculate the ultimate bearing capacity of the rock mass, including those of Peck, Hansen and Thornburn; Hoek and Brown; the Army Corps of Engineers; the Naval Facilities Engineering Command; and Terzaghi's general bearing capacity equation. Wide variation was observed in the calculated ultimate bearing capacity values, which ranged over about two orders of magnitude. Only two of the methods provided realistic results when validated with plate-load test data from similar rocks.
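As an illustration of one of the closed-form approaches mentioned above, the sketch below evaluates the general bearing capacity equation for a strip footing with one common set of bearing capacity factors (Reissner Nq, Prandtl Nc, Vesić Nγ). The rock-mass strength parameters are hypothetical, and this particular factor set is only one of several used by the methods compared in the study.

```python
import math

def bearing_capacity_factors(phi_deg):
    """Closed-form factors: Reissner Nq, Prandtl Nc, Vesic Ngamma."""
    phi = math.radians(phi_deg)
    nq = math.exp(math.pi * math.tan(phi)) * math.tan(math.radians(45 + phi_deg / 2)) ** 2
    nc = (nq - 1.0) / math.tan(phi) if phi_deg > 0 else 5.14
    ngamma = 2.0 * (nq + 1.0) * math.tan(phi)
    return nc, nq, ngamma

def q_ult_kpa(c_kpa, phi_deg, gamma_kn_m3, B_m, Df_m):
    """General bearing capacity equation for a strip footing (no shape or
    depth corrections): q_ult = c*Nc + gamma*Df*Nq + 0.5*gamma*B*Ngamma."""
    nc, nq, ngamma = bearing_capacity_factors(phi_deg)
    return c_kpa * nc + gamma_kn_m3 * Df_m * nq + 0.5 * gamma_kn_m3 * B_m * ngamma

# Hypothetical fractured-granite rock-mass parameters, for illustration only:
q_ult = q_ult_kpa(c_kpa=150.0, phi_deg=35.0, gamma_kn_m3=26.0, B_m=2.0, Df_m=1.5)
print(f"ultimate bearing capacity ~ {q_ult:,.0f} kPa")
```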
266

Learning, Price Formation and the Early Season Bias in the NBA

Baryla, Edward A., Borghesi, Richard A., Dare, William H., Dennis, Steven A. 01 September 2007 (has links)
We test the NBA betting market for efficiency and find that totals lines are significantly biased early each season, yet sides lines do not show a similar bias. While market participants generally force line movements in the correct direction from open to close, they do not fully remove the identified bias in totals lines. This inefficiency enables a profitable technical trading strategy, as the resulting win rate of our proposed simple betting strategy against the closing totals line is 56.72%.
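To see why the reported win rate implies profitability, compare it with the breakeven rate under standard -110 pricing (risk 110 to win 100); the -110 vig is an assumption made for illustration and may differ from the actual lines in the data.

```python
# Breakeven win rate and expected return at assumed -110 totals pricing.
risk, win = 110.0, 100.0
breakeven = risk / (risk + win)                  # ~52.38% needed just to break even
p = 0.5672                                       # win rate reported for the strategy
ev_per_unit_risked = (p * win - (1 - p) * risk) / risk
print(f"breakeven win rate: {breakeven:.4f}")
print(f"expected return per unit risked at p = {p}: {ev_per_unit_risked:+.4f}")
```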
267

Numerical Study of Fire Spread Between Thin Parallel Samples in Microgravity

van den Akker, Enna Chia 23 May 2022 (has links)
No description available.
268

FÖRSKOLOR UNDER PANDEMISKT UTBROTT : Hygien- och smittskyddsarbete / Preschools during pandemic outbreak : Hygiene and infection protection work

Hamiroune, Sofiane January 2020 (has links)
Viral infections account for about 80% of all infections that preschool children suffer from. Many studies have shown that infectious diseases spread easily in environments where many individuals share a limited space at the same time. Coronavirus disease 2019 (Covid-19) is caused by a novel coronavirus first identified in the city of Wuhan, China; it can cause serious respiratory problems, and the infection spreads easily between people. The Swedish Public Health Authority has developed hygiene recommendations to reduce the risk of infection and limit the spread of the coronavirus through good hygiene routines and social distancing. The purpose of my study was to highlight hygiene and infection prevention work in preschools in different parts of Sweden. The study examined similarities and differences between the southern, central, and northern parts of the country regarding hygiene work in preschools during the pandemic. To answer the research questions, a survey about hygiene work was sent to pedagogues at preschools in different municipalities. The results of the survey showed both similarities and differences between preschools in different parts of the country and indicated that pedagogues' knowledge of general hygiene practices is high. The study also showed deficiencies in the use of disinfectant by preschool children. Maintaining the same level of hygiene in preschools may have good effects and can reduce infections caused by pathogenic microorganisms other than the coronavirus.
269

Budget d’erreur en optique adaptative : Simulation numérique haute performance et modélisation dans la perspective des ELT / Adaptive optics error breakdown : high performance numerical simulation and modeling for future ELT

Moura Ferreira, Florian 11 October 2018 (has links)
In a few years, a new class of telescopes will come into existence: giant telescopes, characterized by a diameter larger than 20 m, up to 39 m for the European representative, the Extremely Large Telescope (ELT). However, the Earth's atmosphere severely degrades images obtained during ground-based observations: the resolution of these telescopes is then reduced to that of an amateur telescope a few tens of centimeters in diameter. Adaptive optics (AO) therefore becomes essential. It corrects, in real time, the perturbations induced by the atmosphere and recovers the theoretical resolution of the telescope. Nevertheless, AO systems are not free of defects: a residual wavefront error persists and degrades the quality of the images obtained. Image quality depends on the point spread function (PSF) of the instrument used, and the PSF of an AO system itself depends on the residual wavefront error, so identifying and understanding the error sources is essential. For these giant telescopes, the dimensioning of the required AO systems becomes such that they represent a technological and technical challenge. One aspect to consider is the numerical complexity of these systems, which makes high performance computing techniques, such as massive parallelization, necessary. General Purpose Graphical Processing Unit (GPGPU) computing allows a graphics processor, with its several thousand usable compute cores compared with a few tens for a conventional processor, to be used for this purpose. In this context, this thesis is organized in three parts. The first presents the development of COMPASS, a high performance end-to-end simulation tool dedicated to AO, in particular at ELT scale; taking full advantage of GPU computing power, COMPASS can simulate an ELT-class AO system in a few minutes. The second part describes the development of ROKET, a complete estimator of the error budget of an AO system integrated into COMPASS, which allows the different error sources and their possible relationships to be studied statistically. Finally, analytical models of the different error sources are derived and lead to a PSF estimation algorithm, whose possible on-sky applications are also discussed. / In a few years, a new class of giant telescopes will appear, with diameters larger than 20 m, up to 39 m for the European Extremely Large Telescope (ELT). However, images obtained from ground-based observations are severely degraded by the atmosphere, which reduces the resolution of these giant telescopes to that obtained with an amateur telescope of a few tens of centimeters in diameter. Adaptive optics (AO) therefore becomes essential, as it aims to correct in real time the disturbance due to atmospheric turbulence and to retrieve the theoretical resolution of the telescope. Nevertheless, AO systems are not perfect: a residual wavefront error remains and still degrades image quality. The latter is measured by the point spread function (PSF) of the system, and this PSF depends on the residual wavefront error.
Hence, identifying and understanding the various contributors to the AO residual error is essential. For these extremely large telescopes, the dimensioning of the AO systems is challenging. In particular, the numerical complexity affects the numerical simulation tools needed for AO design, so high performance computing techniques, such as massive parallelization, are required. General Purpose Graphical Processing Unit (GPGPU) computing enables GPUs to be used for this purpose; this architecture is suitable for massive parallelization as it leverages a GPU's several thousand cores, instead of a few tens for a classical CPU. In this context, this PhD thesis is composed of three parts. The first presents the development of COMPASS, a GPU-based high performance end-to-end simulation tool for AO systems suitable for ELT scale; its performance allows AO systems for the ELT to be simulated in a few minutes. In the second part, an error breakdown estimation tool, ROKET, is added to the end-to-end simulation in order to study the various contributors to the AO residual error. Finally, an analytical model is proposed for these error contributors, leading to a new way to estimate the PSF. Possible on-sky applications are also discussed.
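As a sketch of the link between residual wavefront error and PSF quality noted above, the snippet below applies the textbook extended Maréchal approximation, which relates the Strehl ratio to the residual phase variance. The error terms and wavelength are hypothetical, and this relation is only a rough stand-in, not the PSF estimation algorithm developed with ROKET and COMPASS.

```python
import numpy as np

def strehl_from_wfe(wfe_nm_rms, wavelength_nm):
    """Extended Marechal approximation: S ~ exp(-sigma_phi^2), with sigma_phi
    the residual wavefront phase error in radians RMS."""
    sigma_phi = 2.0 * np.pi * wfe_nm_rms / wavelength_nm
    return np.exp(-sigma_phi ** 2)

# Hypothetical, statistically independent error-budget terms (nm RMS):
terms = {"fitting": 110.0, "temporal": 60.0, "aliasing": 45.0, "noise": 30.0}
total = np.sqrt(sum(v ** 2 for v in terms.values()))     # quadratic sum of the budget
print(f"total residual: {total:.0f} nm RMS "
      f"-> Strehl at 1650 nm: {strehl_from_wfe(total, 1650.0):.2f}")
```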
270

Multiple Reference Active Noise Control

Tu, Yifeng 25 March 1997 (has links)
The major application of active noise control (ANC) has focused on using a single reference signal; work on multiple-reference ANC is scarce. Here, the behavior of multiple-reference ANC is analyzed in both the frequency and time domains, and coherence functions are provided to evaluate the effectiveness of multiple-reference ANC. When there are multiple noise sources, multiple reference sensors are needed to generate complete reference signals. A simplified method combines the signals from the multiple reference sensors into a single reference signal; although this can give satisfactory noise control under special circumstances, the performance is generally compromised. A widely adopted method feeds each reference signal into a different control filter. This approach suffers from ill-conditioning when the reference signals are correlated, which results in a slow convergence rate and high sensitivity to measurement error, especially when the FXLMS algorithm is applied. To handle this particular problem, the decorrelated filtered-x LMS (DFXLMS) algorithm is developed and studied in this thesis. Both simulations and experiments have been conducted to verify the DFXLMS algorithm and other issues associated with multiple-reference ANC. The results presented herein are consistent with the theoretical analysis and indicate that the DFXLMS algorithm is effective in improving the convergence speed. To take maximum advantage of the TMS320C30 DSP board used to implement the controller, several DSP programming issues are discussed, and assembly routines are given in the appendix. Furthermore, a graphical user interface (GUI) running under the Windows environment is introduced; its main purpose is to facilitate parameter modification, real-time data monitoring, and DSP process control. / Master of Science
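The sketch below shows the basic mechanics of a two-reference filtered-x LMS loop with correlated references, the situation in which the ill-conditioning discussed above arises. The paths, signals, filter length, and step size are illustrative assumptions, and the decorrelation stage that distinguishes the DFXLMS algorithm is deliberately not reproduced here; with these settings the residual at the error microphone should nonetheless shrink as the filters adapt.

```python
import numpy as np

rng = np.random.default_rng(0)

fs, n = 2000, 8000
t = np.arange(n) / fs
x1 = np.sin(2 * np.pi * 60 * t) + 0.05 * rng.standard_normal(n)
x2 = 0.8 * x1 + np.sin(2 * np.pi * 90 * t)        # correlated with x1 -> ill-conditioning
p_path = np.array([0.0, 0.9, 0.4, 0.2])           # primary path: noise sources -> error mic
s_path = np.array([0.0, 0.7, 0.3])                # secondary path: speaker -> error mic (assumed known)
d = np.convolve(x1 + x2, p_path)[:n]              # disturbance seen at the error microphone

L, mu = 16, 0.002
w = np.zeros((2, L))                              # one adaptive control filter per reference
xbuf, fxbuf = np.zeros((2, L)), np.zeros((2, L))
sbuf, ybuf = np.zeros((2, len(s_path))), np.zeros(len(s_path))
sq_err = np.zeros(n)

for k in range(n):
    for i, x in enumerate((x1, x2)):
        xbuf[i] = np.roll(xbuf[i], 1); xbuf[i, 0] = x[k]
        sbuf[i] = np.roll(sbuf[i], 1); sbuf[i, 0] = x[k]
        fxbuf[i] = np.roll(fxbuf[i], 1); fxbuf[i, 0] = sbuf[i] @ s_path   # filtered-x signal
    y = sum(w[i] @ xbuf[i] for i in range(2))     # anti-noise before the secondary path
    ybuf = np.roll(ybuf, 1); ybuf[0] = y
    e = d[k] - ybuf @ s_path                      # residual at the error microphone
    for i in range(2):
        w[i] += mu * e * fxbuf[i]                 # filtered-x LMS weight update
    sq_err[k] = e * e

print("mean squared error, first vs last 0.5 s:",
      sq_err[:fs // 2].mean().round(3), sq_err[-fs // 2:].mean().round(3))
```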
