261

Performance analysis of suboptimal soft decision DS/BPSK receivers in pulsed noise and CW jamming utilizing jammer state information

Juntti, J. (Juhani) 17 June 2004 (has links)
Abstract The problem of receiving direct sequence (DS) spread spectrum, binary phase shift keyed (BPSK) information in pulsed noise and continuous wave (CW) jamming is studied in the presence of additive white noise. An automatic gain control is not modelled. The general system theory of receiver analysis is first presented and previous literature is reviewed. The study treats the problem of decision making after matched-filter or integrate-and-dump demodulation. The decision method has a great effect on system performance under pulsed jamming. The following receivers are compared: hard, soft, quantized soft, signal-level-based erasure, and chip combiner receivers. The analysis is based on a channel parameter D and a bit error upper bound, obtained from a Chernoff upper bound and a union bound. Simulations were carried out in the original papers using a convolutionally coded DS/BPSK system and confirm that the analytical results are valid; the final conclusions are based on the analytical results. The analysis is presented for pulsed noise and CW jamming, and the same kinds of methods can also be used to analyse other jamming signals. The receivers are compared under pulsed noise and CW jamming together with white Gaussian noise. The results show that noise jamming is more harmful than CW jamming and that a jammer should use a high pulse duty factor. If the jammer cannot optimise the pulse duty factor, a good robust choice is continuous-time jamming. The best performance was achieved by the chip combiner receiver. Only slightly worse were the quantized soft and signal-level-based erasure receivers. The hard decision receiver was clearly worse. The soft decision receiver without jammer state information was shown to be the most vulnerable to pulsed jamming. The chip combiner receiver is 3 dB worse than an optimum receiver (the soft decision receiver with perfect channel state information). If a simple implementation is required, the hard decision receiver should be used. If a moderately complex implementation is allowed, the quantized soft decision receiver should be used. The signal-level-based erasure receiver does not give any remarkable improvement and is more complex to implement, so it is not worth using. If receiver complexity is not a limiting factor, the chip combiner receiver should be used. Uncoded DS/BPSK systems are vulnerable to jamming, and channel coding is an essential part of an antijam communication system. Detecting the jamming and erasing jammed symbols in the channel decoder can remove the effect of pulsed jamming. The realization of erasure receivers is rather easy using current integrated circuit technology.
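As an illustration of the kind of analysis summarized above, the sketch below computes a Chernoff-bound channel parameter D for a soft-decision metric under pulsed noise jamming, using a simplified chip-level Gaussian model; the model, parameter names and numbers are illustrative assumptions and are not taken from the thesis.

```python
import numpy as np

def chernoff_D(es, n0, nj, rho):
    """Chernoff channel parameter D = min_l E[exp(-l*y)] for a soft-decision chip
    metric y = sqrt(Es) + n (transmitted chip taken as +sqrt(Es)).  The noise n is
    Gaussian with variance N0/2 when the chip is not jammed and N0/2 + NJ/(2*rho)
    when it is jammed: the jammer concentrates its average density NJ into a
    fraction rho of the time.  No jammer state information is used."""
    s = np.sqrt(es)
    var_clear = n0 / 2.0
    var_jam = n0 / 2.0 + nj / (2.0 * rho)
    lam = np.linspace(1e-3, 20.0, 4000)          # grid search over the Chernoff parameter
    with np.errstate(over="ignore"):             # large-lambda terms may overflow to inf
        d = (1.0 - rho) * np.exp(-lam * s + 0.5 * lam**2 * var_clear) \
            + rho * np.exp(-lam * s + 0.5 * lam**2 * var_jam)
    return float(np.nanmin(d))

# Example: Es/N0 = 7 dB, Es/NJ = 0 dB; sweep the jammer's pulse duty factor rho.
es, n0, nj = 1.0, 10 ** -0.7, 1.0
for rho in (0.05, 0.1, 0.3, 0.5, 1.0):
    print(f"rho = {rho:.2f} -> D = {chernoff_D(es, n0, nj, rho):.4f}")

# A union-Chernoff bound on the coded bit error rate then follows as
# Pb <= sum_d beta_d * D**d over the convolutional code's distance spectrum.
```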
262

Enhancing the performance of ad hoc networking by lower layer design

Prokkola, J. (Jarmo) 25 November 2008 (has links)
Abstract Research on early ad hoc-like networks, namely multi-hop packet radio networks, concentrated mainly on the lower layers (below the network layer). Interestingly, research on modern ad hoc networks has been largely restricted to routing protocols. This is understandable, since routing is very challenging in such dynamic networks, but the drawback is that the lower layer models used in these studies are often highly simplified, giving inaccurate or even incorrect results. In addition, modern ad hoc network solutions are usually suboptimal because the lower layers in use were not designed specifically for ad hoc networking. Thus, ad hoc networking performance can, in general, be notably enhanced by also considering the design of the lower layers. Simple deployment and robustness make wireless ad hoc networks attractive for several applications (e.g., military, public authority, peer-to-peer civilian, and sensor networking), but the performance of current solutions is typically not adequate. The focus of this work is on the effects of lower layer functionalities on the performance of ad hoc networks, while also taking into account the effects of the upper layers (e.g., the effect of application traffic models). A CDMA (Code Division Multiple Access) based dual-channel flat ad hoc network solution, incorporating cross-layering between the three lowest layers, is proposed and analyzed. Its main element is the Bi-Code Channel Access (BCCA) method, in which a common code channel is used for broadcast information (e.g., route discovery), while a receiver-specific code channel is used for all directed transmissions. In addition, a new MAC (Medium Access Control) solution designed for BCCA is presented. Moreover, a novel network-layer spreading code distribution (NSCD) method is presented. The advantage of these methods is that they are designed specifically for use in ad hoc networks. With an extensive set of case studies, it is shown that the presented methods outperform the typically used ad hoc network solutions (based on IEEE 802.11) in different kinds of scenarios, environments, and modeling approaches, and with different parameters. Detailed simulations are carried out in order to analyze the effects of different features at the lower layers, also revealing interesting phenomena and dependencies between layers. It is also shown that close attention should be paid to lower layer modeling even when the focus is on overall network performance. In addition, various interesting features and behaviors of ad hoc networking are brought up. / Tiivistelmä Ensimmäiset tutkimukset rakenteettomista (ad hoc) verkoista esiintyivät nimellä monihyppypakettiradioverkot, ja ne koskivat pääasiassa verkkokerroksen alapuolella olevia tietoliikennekerroksia, mutta nykyiset tutkimukset ovat kuitenkin keskittyneet pääasiassa reititysprotokolliin. Tämä on sikäli ymmärrettävää, että reititys on hyvin haasteellista tämän tyyppisissä dynaamisissa verkoissa, mutta ongelma on, että käytetyt alempien kerrosten mallit ovat usein hyvinkin yksinkertaistettuja, mikä voi johtaa epätarkkoihin tai jopa vääriin tuloksiin. Tämän lisäksi nykyiset ehdotetut rakenteettomien verkkojen ratkaisut ovat usein tehottomia, sillä käytettyjä alempien kerrosten ratkaisuja ei ole tarkoitettu tällaisiin verkkoihin. Niinpä rakenteettomien verkkojen suorituskykyä voidaan parantaa huomattavasti kiinnittämällä huomiota alempien kerrosten suunnitteluun.
Verkkojen rakenteettomuus on ajatuksena houkutteleva useissa käyttökohteissa (esimerkiksi sotilasympäristössä, viranomaiskäytössä, käyttäjien välisissä suorissa yhteyksissä ja sensoriverkoissa), mutta suorituskyky ei useinkaan ole riittävällä tasolla käytännön sovelluksiin. Työssä tutkitaan pääasiassa alempien kerrosten toiminnallisuuden vaikutusta rakenteettomien verkkojen suorituskykyyn ottaen huomioon myös ylemmät kerrokset, kuten sovellustason mallit. Työssä esitellään ja analysoidaan koodijakomonikäyttöön (CDMA, Code Division Multiple Access) perustuva kaksikanavainen tasaisen rakenteettoman verkon ratkaisu, jossa hyödynnetään kaikkien kolmen alimman kerroksen välistä keskinäistä viestintää. Ratkaisun ydin on BCCA-menetelmä (Bi-Code Channel Access), jossa käytetään kahta kanavaa tiedonsiirtoon. Yksi kanava on tarkoitettu kaikille yhteiseksi kontrollikanavaksi (esimerkiksi reitinmuodostus voi käyttää tätä kanavaa), kun taas toinen kanava on käyttäjäkohtainen kanava, jota käytetään suoraan viestittämiseen kyseiselle käyttäjälle (varsinainen data yms.). Tämän lisäksi esitellään myös BCCA-menetelmää varten suunniteltu kanavakontrollimenetelmä sekä verkkotasolla toimiva hajotuskooditiedon jakamiseen tarkoitettu menetelmä. Näiden uusien menetelmien etu on se, että ne on suunniteltu nimenomaan rakenteettomiin verkkoihin. Kattavan testivalikoiman avulla osoitetaan, että esitetty uusi ratkaisu peittoaa tyypilliset IEEE 802.11 -standardiin pohjautuvat rakenteettomien verkkojen ratkaisut. Testeissä käytetään erityyppisiä verkkorakenteita, ympäristöjä, mallinnusmenetelmiä ja parametreja. Yksityiskohtaisissa simuloinneissa ajetaan eri testitapauksia ja selvitetään, miten alempien kerrosten eri menetelmät missäkin tapauksessa vaikuttavat suorituskykyyn. Alempien kerrosten mallinnuksessa on syytä olla tarkkana, sillä työssä käy ilmi, että mallinnusvirheillä voi olla suurikin vaikutus myös ylempien kerrosten suorituskykyyn. Työ myös paljastaa useita mielenkiintoisia ilmiöitä ja vuorovaikutussuhteita, jotka liittyvät tutkittujen menetelmien ja yleisesti rakenteettomien verkkojen toimintaan.
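A minimal sketch of the two-channel idea described above, in which broadcasts use a common code channel and directed transmissions use the intended receiver's own code; the data structures and code assignments are hypothetical, and the actual BCCA, MAC and NSCD mechanisms are not modelled.

```python
from dataclasses import dataclass
from typing import Dict

COMMON_CODE = 0   # common code channel: broadcast traffic such as route discovery

@dataclass
class Packet:
    dst: str        # "*" denotes a broadcast
    payload: bytes

def spreading_code_for(pkt: Packet, code_of: Dict[str, int]) -> int:
    """Channel selection in the spirit of BCCA: broadcasts on the common code,
    directed transmissions on the intended receiver's own code channel.
    (Code assignment and distribution, e.g. via NSCD, are not modelled here.)"""
    return COMMON_CODE if pkt.dst == "*" else code_of[pkt.dst]

# Hypothetical code assignment for two nodes
codes = {"node-A": 17, "node-B": 42}
print(spreading_code_for(Packet("*", b"RREQ"), codes))       # -> 0  (common channel)
print(spreading_code_for(Packet("node-B", b"DATA"), codes))  # -> 42 (receiver-specific)
```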
263

DSSS Communication Link Employing Complex Spreading Sequences

Marx, Frans Engelbertius 24 January 2006 (has links)
The present explosion in digital communications and multi-user wireless cellular networks has created a demand for more effective modulation methods that utilize the available frequency spectrum more efficiently. To accommodate a large number of users sharing the same available frequency band, one requirement is the availability of large families of spreading sequences with excellent auto-correlation (AC) and cross-correlation (CC) properties. Another requirement is the availability of sets of orthogonal basis functions to extend capacity by exploiting all available degrees of freedom (e.g., temporal, frequency and spatial dimensions), or by employing orthogonal multi-code operation in parallel, as is done in the latest 3GPP and 3GPP2 Wide-band Code Division Multiple Access (WCDMA) modulation standards, which employ sets of orthogonal Walsh codes to improve overall data throughput capacity. The generic Direct Sequence Spread Spectrum (DSSS) transmitter developed in this dissertation was originally designed and implemented to investigate the practicality and usefulness of complex spreading sequences, and secondly, to verify the concept of non-linearly interpolated root-of-unity (NLI-RU) filtering. It was found that both concepts have large potential for application in point-to-point, and particularly micro-cellular, Wireless Local Area Network (WLAN) and Wireless Local Loop (WLL) environments. Since then, several novel concepts and subsystems have been added to the original system, some of which have been patented both locally and abroad, and are outlined below. Consequently, the ultimate goal of this research project was to apply the principles of the generic DSSS transmitter and receiver developed in this study in the implementation of a WLL radio-frequency (RF) link, and particularly towards the establishment of affordable wireless multimedia services in rural areas. The extended coverage at exceptionally low power emission levels offered by the new design will be particularly useful in rural applications. The proposed WLL concept can, for example, also be utilized to add a unique mobility feature to existing Private Automatic Branch Exchanges (PABXs). The proposed system will in addition offer superior teletraffic capacity compared to existing micro-cellular technologies, e.g., the Digital European Cordless Telephony (DECT) system, which has been considered by Telkom for deployment in rural areas. The latter is a rather outdated interim standard offering much lower spectral efficiency and capacity than competitive CDMA solutions, such as the concept analyzed in this dissertation, which is based on the use of unique large families of spectrally well-confined (i.e., band-limited) constant envelope (CE) complex spreading sequences (CSS) with superior correlation properties. The CE characteristic of the new spreading sequences furthermore facilitates the design of systems with superior power efficiency and exceptionally robust performance (much less spectral regrowth) compared to existing 2G and 3G modulation standards in the presence of non-linear power amplification. This feature allows for a system with larger coverage for a given performance level and limited peak power, or alternatively, longer battery life for a given maximum communication distance and performance level, within a specified fixed spreading bandwidth.
In addition, the possibility of extending the concept to orthogonal multi-code operation provides capacity comparable to present 3G modulation standards, while still preserving superior power efficiency under non-linear power amplification. Conventional spread spectrum communication systems employ binary spreading sequences, such as Gold or Kasami sequences. The practical implementation of such a system is relatively simple. The design and implementation of a spread-spectrum communication system employing complex spreading sequences is, however, considerably more complex and has not previously been presented, nor implemented in hardware. The design of appropriate code lock loops for CSS has led to a unique design with a 3 dB performance advantage over similar loops designed for binary spreading sequences. The theoretical analysis and simulation of such a system will be presented, with the primary focus on an efficient hardware implementation of all new concepts proposed, in the form of a WLL RF-link demonstrator. / Dissertation (MEng (Electronic Engineering))--University of Pretoria, 2007. / Electrical, Electronic and Computer Engineering / unrestricted
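For illustration, the sketch below spreads BPSK data with a constant-envelope complex sequence and checks its periodic autocorrelation. A Zadoff-Chu sequence is used only as a well-known stand-in; the specific CSS families, NLI-RU filtering and code lock loops of the dissertation are not reproduced here.

```python
import numpy as np

def zadoff_chu(N, root):
    """Constant-envelope complex sequence with ideal periodic autocorrelation
    (odd length N, root coprime to N). Stand-in for the CSS families in the text."""
    n = np.arange(N)
    return np.exp(-1j * np.pi * root * n * (n + 1) / N)

def periodic_autocorr(seq):
    """Cyclic autocorrelation magnitude, normalized to the zero-lag peak."""
    N = len(seq)
    return np.abs(np.array([np.vdot(seq, np.roll(seq, k)) for k in range(N)])) / N

N = 63
css = zadoff_chu(N, root=5)
print(np.allclose(np.abs(css), 1.0))            # constant envelope
print(periodic_autocorr(css).round(3)[:5])      # ~[1, 0, 0, 0, 0]

# DSSS spreading/despreading of BPSK data with the complex sequence
data = np.random.choice([-1.0, 1.0], size=8)
chips = np.repeat(data, N) * np.tile(css, len(data))                    # spread
rx = chips + 0.1 * (np.random.randn(chips.size) + 1j * np.random.randn(chips.size))
despread = (rx.reshape(len(data), N) @ np.conj(css)).real / N           # correlate per symbol
print(np.sign(despread) == data)                                        # all True at high SNR
```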
264

Propagation analysis of a 900 MHz spread spectrum centralized traffic signal control system.

Urban, Brian L. 05 1900 (has links)
The objective of this research is to investigate different propagation models to determine if specified models accurately predict received signal levels for short path 900 MHz spread spectrum radio systems. The City of Denton, Texas provided data and physical facilities used in the course of this study. The literature review indicates that propagation models have not been studied specifically for short path spread spectrum radio systems. This work should provide guidelines and be a useful example for planning and implementing such radio systems. The propagation model involves the following considerations: analysis of intervening terrain, path length, and fixed system gains and losses.
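A minimal sketch of the link-budget side of such a prediction: free-space path loss at 900 MHz combined with fixed system gains and losses. All numeric values are placeholders rather than the City of Denton system parameters, and intervening-terrain effects are not modelled.

```python
import math

def fspl_db(d_km, f_mhz):
    """Free-space path loss (dB) for distance d_km and frequency f_mhz."""
    return 32.44 + 20 * math.log10(d_km) + 20 * math.log10(f_mhz)

def received_level_dbm(pt_dbm, gt_dbi, gr_dbi, d_km, f_mhz, fixed_losses_db=0.0):
    """Link budget: Pr = Pt + Gt + Gr - FSPL - fixed losses (cables, connectors)."""
    return pt_dbm + gt_dbi + gr_dbi - fspl_db(d_km, f_mhz) - fixed_losses_db

# Example with placeholder values for a short 900 MHz spread spectrum hop
pr = received_level_dbm(pt_dbm=30.0, gt_dbi=6.0, gr_dbi=6.0,
                        d_km=1.5, f_mhz=915.0, fixed_losses_db=3.0)
print(f"Predicted receive level: {pr:.1f} dBm")   # compare against the radio's sensitivity
```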
265

IMPACTO DE LOS CICLOS ECONÓMICOS EN EL MERCADO DE EUROBONOS

Guaita Martínez, José Manuel 30 March 2015 (has links)
The current crisis has affected every sector, including the large companies of most countries, hence the importance of analysing its impact on any of the essential variables that affect firms, including the financial burden and, in turn, the spread, i.e., the differential between the fixed interest rate and the rate granted to the most solvent and best-guaranteed financial assets, normally US bonds (Treasury Bills) and German bonds (German benchmark). This thesis aims to analyse, by means of Data Envelopment Analysis, the efficiency of fixed-rate Eurobond issues over the period 2004-2012, assessing the impact of the financial crisis on that market. The thesis follows the line of research on the concept of price efficiency and the capacity of prices to absorb all information instantaneously (Fama, 1970; Duarte and Mascareñas, 2013). The research revolves around the spread variable as a fundamental output in financial markets, a cost that bond issuers must try to minimise and that the placement teams of investment banks regard as a key objective. This line of research applies the definition of efficiency within the context of an input-output function, setting a level of financial and macroeconomic inputs and studying the spread obtained at issue. From this concept of efficiency, it is determined which issues are efficient as a result of minimising the spread given specific financial variables. In addition, building on work that analyses financial prediction problems (Altman et al., 1994; Back et al., 1996; Puertas and Martí, 2013), the search for a model that yields a common rule for classifying new observations as they appear, using a series of ratios and financial characteristics of a set of observations, has been studied. Specifically, the studies by Bonilla et al. (2005 and 2006) use non-parametric models to estimate the spread of a sample of fixed-rate Eurobonds traded during the period 1995-1999, concluding that these techniques are highly accurate. Following this line of research, this thesis carries out a comparative analysis of several prediction models in order to determine which of them can estimate the spread while minimising the error incurred. The non-parametric techniques used were Locally Weighted Regression (RLP) and Regression Trees (CART), and the parametric technique was Ordinary Least Squares (MCO) estimation. After building a database of 12,490 Eurobonds, the results show that in the years before the onset of the crisis the volume of efficient issues was higher, with the inefficient ones lying very close to the frontier. From 2008 onwards a drastic change in the conditions of this market is observed, with a considerable and widespread move away from the production frontier.
The study of the Eurobond market itself has helped clarify the role played by the rating agencies in the risk situation created by the financial and economic crisis, culminating in the first lawsuit filed by the US Department of Justice against Standard & Poor's, notably a lawsuit covering the period from September 2004 to October 2007, coinciding with the sharp appreciation of the US housing market and revealing a clear conflict of interest in the agencies' conduct. To all this must be added the increasingly relevant position of the Asian countries, reflected in significant increases in Eurobonds issued in yuan, with Chinese institutions leading the handover from European and US issuers. Exceptionally, issues from China were found in the financial sector that reach the same efficient frontier as those of developed countries, achieving above-average growth rates although small in nominal terms, in a sector that enjoys the full support and control of the Chinese state. The results show a break in the Eurobond market following the financial crisis, with a general decline in the level of efficiency across all sectors, more irregular financing and rises in risk premia, except for financial services, which made fewer but more efficient issues. In practically none of the sectors analysed for either period does the economic theory underpinning capital markets hold in terms of the relationship between efficiency, spread, issue volume, currencies and credit rating. There is also a marked decrease in issues located in tax havens. In addition, the enormous economic crisis originating in the US is reflected more deeply, with US and European issues disappearing from the Eurobond market as those countries focus on correcting economic imbalances and public deficits, being replaced by emerging markets, northern Europe, Australia and Hong Kong. The prediction models perform better in 2009-2012, with a smaller number of issuers (almost 80% fewer) but more accurate estimates and lower cost. The energy sector is the most predictable, owing to the very structure of the industry, and achieves the highest level of efficiency. This thesis does not create a new paradigm, but it does study a financial market, the Eurobond market, over the period before and during the crisis, without precedent in finance research, with an efficiency analysis and a prediction model whose results diverge from the economic theoretical framework but which can help large companies advance in their decision-making in the search to minimise their financial burden. / Guaita Martínez, JM. (2014). IMPACTO DE LOS CICLOS ECONÓMICOS EN EL MERCADO DE EUROBONOS [Tesis doctoral no publicada]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/48486 / TESIS
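A minimal sketch of the parametric vs. non-parametric comparison described above (OLS against a CART-style regression tree), using synthetic data; the features, data-generating process, and sample are invented for illustration and do not reproduce the thesis's 12,490-bond database or its locally weighted regression (RLP) estimator.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.tree import DecisionTreeRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)
n = 2000
# Synthetic issue-level features: rating notch (0 = best), maturity (years),
# log nominal size, and a post-2008 crisis indicator
X = np.column_stack([
    rng.integers(0, 10, n),
    rng.uniform(1, 30, n),
    rng.normal(6, 1, n),
    rng.integers(0, 2, n),
])
# Invented non-linear relation between features and spread (basis points)
spread = 40 + 25 * X[:, 0] + 2 * X[:, 1] - 5 * X[:, 2] + 120 * X[:, 3] \
         + 15 * X[:, 0] * X[:, 3] + rng.normal(0, 20, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, spread, test_size=0.3, random_state=0)
for name, model in [("OLS (MCO)", LinearRegression()),
                    ("Regression tree (CART)", DecisionTreeRegressor(max_depth=6, random_state=0))]:
    model.fit(X_tr, y_tr)
    print(f"{name:24s} MAE = {mean_absolute_error(y_te, model.predict(X_te)):.1f} bp")
```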
266

Comparative Bearing Capacity Analysis of Spread Footing Foundation on Fractured Granites

Nandi, Arpita 01 August 2011 (has links)
It is evident from several studies that ultimate bearing capacities calculated by traditional methods are conservative and subjective. For large civil structures founded on spread footings, more cost-effective and safer foundations could be achieved by adopting optimum ultimate bearing capacity values based on an objective and pragmatic analysis. There is a pressing need to modify the existing methods for accurate estimation of the bearing capacities of rocks for spread footings. In practice, foundation bearing capacities of rock masses are often estimated using the presumptive values from the Building Officials and Code Administrators National Building Code and methods adopted by the American Association of State Highway and Transportation Officials. However, the estimated values are often not realistic, and site-specific analyses are essential. In this study, geotechnical reports and drill-log data from successful geotechnical design projects founded on a wide range of granites in eastern Tennessee were consulted. Different published methods were used to calculate the ultimate bearing capacity of the rock mass. These methods included those of Peck, Hansen and Thornburn; Hoek and Brown; the Army Corps of Engineers; the Naval Facilities Engineering Command; and Terzaghi's general bearing capacity equation. Wide variation was observed in the calculated ultimate bearing capacity values, which ranged over about two orders of magnitude. Only two of the methods provided realistic results when validated with plate-load test data from similar rocks.
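As one example of the kind of published method compared, the sketch below evaluates the general bearing capacity equation for a strip footing with Vesic-type factors; the material properties are placeholders, and rock-mass-specific methods such as Hoek-Brown follow different formulations not shown here.

```python
import math

def bearing_capacity_factors(phi_deg):
    """Vesic/Reissner-Prandtl bearing capacity factors for friction angle phi (deg)."""
    phi = math.radians(phi_deg)
    nq = math.exp(math.pi * math.tan(phi)) * math.tan(math.radians(45) + phi / 2) ** 2
    nc = (nq - 1) / math.tan(phi) if phi_deg > 0 else 5.14
    ngamma = 2 * (nq + 1) * math.tan(phi)
    return nc, nq, ngamma

def ultimate_bearing_capacity(c, phi_deg, gamma, depth, width):
    """General bearing capacity equation for a strip footing:
    q_ult = c*Nc + gamma*D*Nq + 0.5*gamma*B*Ngamma  (kPa, with kN/m^3 and m)."""
    nc, nq, ngamma = bearing_capacity_factors(phi_deg)
    surcharge = gamma * depth
    return c * nc + surcharge * nq + 0.5 * gamma * width * ngamma

# Placeholder properties for a weathered/fractured granite treated as a c-phi material
q_ult = ultimate_bearing_capacity(c=150.0, phi_deg=35.0, gamma=22.0, depth=1.5, width=2.0)
print(f"q_ult ~ {q_ult:,.0f} kPa")
```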
267

Learning, Price Formation and the Early Season Bias in the NBA

Baryla, Edward A., Borghesi, Richard A., Dare, William H., Dennis, Steven A. 01 September 2007 (has links)
We test the NBA betting market for efficiency and find that totals lines are significantly biased early each season, yet sides lines do not show a similar bias. While market participants generally force line movements in the correct direction from open to close, they do not fully remove the identified bias in totals lines. This inefficiency enables a profitable technical trading strategy, as the resulting win rate of our proposed simple betting strategy against the closing totals line is 56.72%.
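For context on why a 56.72% win rate is profitable: at standard -110 pricing the breakeven win rate is 110/210, about 52.4%. A minimal check, assuming -110 odds (the paper's exact pricing may differ):

```python
def breakeven_win_rate(american_odds=-110):
    """Win rate needed to break even at the given (negative, favorite-style) American odds."""
    risk = abs(american_odds)
    return risk / (risk + 100.0)

def expected_return_per_unit(win_rate, american_odds=-110):
    """Expected profit per 1 unit risked at negative American odds."""
    payout = 100.0 / abs(american_odds)   # profit per unit risked, e.g. 0.909 at -110
    return win_rate * payout - (1 - win_rate)

print(f"Breakeven at -110: {breakeven_win_rate():.4f}")            # ~0.5238
print(f"EV at 56.72% win rate: {expected_return_per_unit(0.5672):+.4f} units per bet")
```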
268

Numerical Study of Fire Spread Between Thin Parallel Samples in Microgravity

van den Akker, Enna Chia 23 May 2022 (has links)
No description available.
269

FÖRSKOLOR UNDER PANDEMISKT UTBROTT : Hygien- och smittskyddsarbete / Preschools during pandemic outbreak : Hygiene and infection protection work

Hamiroune, Sofiane January 2020 (has links)
Viral infections account for about 80% of all infections that preschool children suffer from. Many studies have shown that infectious diseases spread easily in environments where many individuals share a limited space at the same time. Coronavirus disease 2019 (Covid-19) is caused by a new virus first identified in the city of Wuhan, China; it can cause serious respiratory problems, and the infection spreads easily between people. The Swedish Public Health Authority has developed hygiene recommendations to reduce the risk of infection and limit the spread of the coronavirus through good hygiene routines as well as social distancing. The purpose of my study was to highlight hygiene and infection prevention work in preschools from different parts of Sweden. The study examined similarities and differences between the southern, central and northern parts of the country regarding hygiene work in preschools during the pandemic. To answer the research questions, a survey about hygiene work was sent to pedagogues at preschools in different municipalities. The results of the survey showed both similarities and differences between preschools in different parts of the country and indicated that knowledge among pedagogues is high when it comes to general hygiene practices. The study showed deficiencies in the use of disinfectant by preschool children. Maintaining the same level of hygiene in preschools may have good effects and can reduce infections caused by pathogenic microorganisms other than the coronavirus.
270

Budget d’erreur en optique adaptative : Simulation numérique haute performance et modélisation dans la perspective des ELT / Adaptive optics error breakdown : high performance numerical simulation and modeling for future ELT

Moura Ferreira, Florian 11 October 2018 (has links)
D'ici quelques années, une nouvelle classe de télescopes verra le jour : celle des télescopes géants. Ceux-ci se caractériseront par un diamètre supérieur à 20m, et jusqu'à 39m pour le représentant européen, l'Extremely Large Telescope (ELT). Seulement, l'atmosphère terrestre vient dégrader sévèrement les images obtenues lors d'observations au sol : la résolution de ces télescopes est alors réduite à celle d'un télescope amateur de quelques dizaines de centimètres de diamètre. L'optique adaptative (OA) devient alors essentielle. Cette dernière permet de corriger en temps-réel les perturbations induites par l'atmosphère et de retrouver la résolution théorique du télescope. Néanmoins, les systèmes d'OA ne sont pas exempts de tout défaut, et une erreur résiduelle persiste sur le front d'onde (FO) et impacte la qualité des images obtenues. Cette dernière est dépendante de la Fonction d'Étalement de Point (FEP) de l'instrument utilisé, et la FEP d'un système d'OA dépend elle-même de l'erreur résiduelle de FO. L'identification et la compréhension des sources d'erreurs est alors primordiale. Dans la perspective de ces télescopes géants, le dimensionnement des systèmes d'OA nécessaires devient tel que ces derniers représentent un challenge technologique et technique. L'un des aspects à considérer est la complexité numérique de ces systèmes. Dès lors, les techniques de calcul de haute performance deviennent nécessaires, comme la parallélisation massive. Le General Purpose Graphical Processing Unit (GPGPU) permet d'utiliser un processeur graphique à cette fin, celui-ci possédant plusieurs milliers de coeurs de calcul utilisables, contre quelques dizaines pour un processeur classique. Dans ce contexte, cette thèse s'articule autour de trois parties. La première présente le développement de COMPASS, un outil de simulation haute performance bout-en-bout dédié à l'OA, notamment à l'échelle des ELT. Tirant pleinement parti des capacités de calcul des GPU, COMPASS permet alors de simuler une OA ELT en quelques minutes. La seconde partie fait état du développement de ROKET : un estimateur complet du budget d'erreur d'un système d'OA intégré à COMPASS, permettant ainsi d'étudier statistiquement les différentes sources d'erreurs et leurs éventuels liens. Enfin, des modèles analytiques des différentes sources d'erreur sont dérivés et permettent de proposer un algorithme d'estimation de la FEP. Les possibilités d'applications sur le ciel de cet algorithme sont également discutées. / In a few years, a new class of giant telescopes will appear. The diameter of these telescopes will be larger than 20 m, up to 39 m for the European representative, the Extremely Large Telescope (ELT). However, images obtained from ground-based observations are severely degraded by the atmosphere: the resolution of these giant telescopes is then reduced to that of an amateur telescope a few tens of centimeters in diameter. Adaptive optics (AO) therefore becomes essential, as it aims to correct in real time the disturbances induced by atmospheric turbulence and to recover the theoretical resolution of the telescope. Nevertheless, AO systems are not perfect: a wavefront residual error remains and still degrades the image quality. The latter is measured by the point spread function (PSF) of the system, and this PSF depends on the wavefront residual error. 
Hence, identifying and understanding the various contributors to the AO residual error is essential. For these extremely large telescopes, the dimensioning of the required AO systems is a technical and technological challenge. In particular, their numerical complexity affects the simulation tools needed for AO design, so high performance computing techniques, such as massive parallelization, become necessary. General-purpose GPU computing (GPGPU) enables a graphics processor to be used for this purpose, offering several thousand usable compute cores, compared with a few tens for a classical CPU. In this context, this PhD thesis is composed of three parts. The first presents the development of COMPASS: a GPU-based, high-performance, end-to-end simulation tool dedicated to AO, in particular at the ELT scale; its performance makes it possible to simulate an ELT-scale AO system in a few minutes. The second part describes ROKET, a complete error-budget estimator for an AO system integrated into COMPASS, which allows the various error contributors and their possible interdependencies to be studied statistically. Finally, analytical models of the various error contributors are derived, leading to a proposed PSF estimation algorithm. Possible on-sky applications of this algorithm are also discussed.
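A common way to turn an error breakdown of this kind into an image-quality figure is to combine independent wavefront error terms in quadrature and apply the Maréchal approximation for the Strehl ratio; the sketch below uses invented error terms, and the actual ROKET breakdown and PSF model are more detailed.

```python
import numpy as np

def strehl_from_budget(errors_nm_rms, wavelength_nm=1650.0):
    """Combine independent wavefront error terms in quadrature and convert to a
    Strehl ratio with the Marechal approximation S ~ exp(-sigma_phi^2),
    where sigma_phi is the residual phase in radians at the imaging wavelength."""
    total_nm = np.sqrt(sum(e**2 for e in errors_nm_rms.values()))
    sigma_rad = 2 * np.pi * total_nm / wavelength_nm
    return total_nm, np.exp(-sigma_rad**2)

# Invented error budget (nm RMS wavefront error), for illustration only
budget = {
    "fitting": 90.0,
    "temporal (servo-lag)": 60.0,
    "aliasing": 45.0,
    "noise": 40.0,
    "anisoplanatism": 30.0,
}
total, strehl = strehl_from_budget(budget)
print(f"Total residual: {total:.0f} nm RMS  ->  Strehl at H band: {strehl:.2f}")
```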
