41

Identification paramétrique en dynamique transitoire : traitement d’un problème couplé aux deux bouts / Parametric identification in transient dynamics: treatment of a coupled two-point boundary value problem

Nouisri, Amine 18 November 2015 (has links)
Les travaux de thèse portent sur l'identification paramétrique en dynamique transitoire à partir des mesures fortement bruitées, l'un des objectifs à long terme étant de proposer une méthode d’identification peu intrusive afin de pouvoir être implémentée dans des codes de calcul éléments finis commerciaux. Dans ce travail, le concept de l'erreur en relation de comportement modifiée a été retenu pour traiter le problème d’identification des paramètres matériau. La minimisation de la fonctionnelle coût sous contraintes débouche, dans le cas de la dynamique transitoire, sur un problème dit « aux deux bouts » dans lequel il s’agit de résoudre un problème différentiel spatio-temporel avec des conditions à la fois initiales et finales en temps. Il en résulte un problème couplé entre les champs direct et adjoint dont le traitement est délicat. Dans un premier temps, des méthodes précédemment développées telles que la « méthode de Riccati » et la « méthode de tirs » ont été étudiées. Il est montré que l’identification par ces méthodes est robuste même pour des mesures fortement corrompues, mais qu’elles sont limitées par la complexité d’implémentation dans un code industriel, des problèmes de conditionnement ou de coût de calcul. Dans un second temps, une approche itérative basée sur une méthode de sur-relaxation a été développée et comparée à celles précédemment mentionnées sur des exemples académiques, validant l’intérêt de cette nouvelle approche. Enfin, des comparaisons ont été menées entre cette technique et une variante « discrétisée » de la formulation introduite par Bonnet et Aquino [Inverse Problems, vol. 31, 2015]. / This thesis deals with parameter identification in transient dynamics in the case of highly noisy experimental data.
One long-term goal is the derivation of a non-intrusive method suitable for implementation in a commercial finite element code. In this work, the modified error in the constitutive relation framework is used to treat the identification of material parameters. In the case of transient dynamics, the minimization of the cost function under constraints leads to a "two-point boundary value problem", in which the space-time differential problem involves both initial and final conditions in time. This results in a problem coupling the direct and adjoint fields, whose treatment is delicate. In the first part, previously developed methods, such as those based on the Riccati equations and on shooting methods, were studied. It is shown that identification by these methods is robust even for highly corrupted measurements, but that they are limited by implementation intrusiveness in an industrial code, conditioning problems, or numerical cost. In the second part, an iterative over-relaxation approach was developed and compared to the aforementioned approaches on academic examples, validating the interest of this new approach. Finally, comparisons were carried out between this technique and a "discretized" variant of the formulation introduced by Bonnet and Aquino [Inverse Problems, vol. 31, 2015].
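The "two-point" structure described above (conditions imposed at both ends of the time interval) is exactly what shooting methods address: guess the unknown initial data, integrate forward, and correct the guess until the final condition is met. As a minimal illustration on a toy linear ODE (not the thesis's coupled direct/adjoint problem; all names and values below are ours), a secant-based shooting scheme might look like:

```python
import math

def rhs(u, v):
    # First-order form of u'' = u: (u)' = v, (v)' = u
    return v, u

def integrate(slope, n=400):
    """Classical RK4 on [0, 1] with u(0) = 0, u'(0) = slope; returns u(1)."""
    h = 1.0 / n
    u, v = 0.0, slope
    for _ in range(n):
        k1u, k1v = rhs(u, v)
        k2u, k2v = rhs(u + h / 2 * k1u, v + h / 2 * k1v)
        k3u, k3v = rhs(u + h / 2 * k2u, v + h / 2 * k2v)
        k4u, k4v = rhs(u + h * k3u, v + h * k3v)
        u += h / 6 * (k1u + 2 * k2u + 2 * k3u + k4u)
        v += h / 6 * (k1v + 2 * k2v + 2 * k3v + k4v)
    return u

def shoot(target=1.0, s0=0.0, s1=2.0, tol=1e-12):
    """Secant iteration on the unknown initial slope so that u(1) = target."""
    f0, f1 = integrate(s0) - target, integrate(s1) - target
    for _ in range(50):
        if abs(f1) <= tol or f1 == f0:
            break
        s0, s1 = s1, s1 - f1 * (s1 - s0) / (f1 - f0)
        f0, f1 = f1, integrate(s1) - target
    return s1
```

For this boundary value problem the exact solution is u(x) = sinh(x)/sinh(1), so the recovered initial slope converges to 1/sinh(1).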
42

On GPU Assisted Polar Decoding : Evaluating the Parallelization of the Successive Cancellation Algorithm using Graphics Processing Units / Polärkodning med hjälp av GPU:er : En utvärdering av parallelliseringsmöjligheterna av Successive Cancellation-algoritmen med hjälp av grafikprocessorer

Nordqvist, Siri January 2023 (has links)
In telecommunication, messages sent through a wireless medium often experience noise interfering with the signal in a way that corrupts the messages. As the demand for high throughput in the mobile network is increasing, algorithms that can detect and correct these corrupted messages quickly and accurately are of interest to the industry. Polar codes have been chosen by the Third Generation Partnership Project as the error correction code for 5G New Radio control channels. This thesis work aimed to investigate whether the polar code Successive Cancellation (SC) could be parallelized and if a graphics processing unit (GPU) can be utilized to optimize the execution time of the algorithm. The polar code Successive Cancellation was enhanced by implementing tree pruning and support for GPUs to leverage their parallelization. The difference in execution time between the concurrent and sequential versions of the SC algorithm with and without tree pruning was evaluated. The tree pruning SC algorithm almost always offered shorter execution times than the SC algorithm that did not employ tree pruning. However, the support for GPUs did not reduce the execution time in these tests. Thus, the GPU is not certain to be able to improve this type of enhanced SC algorithm based on these results. / Meddelanden som överförs över ett mobilt nät utsätts ofta för brus som distorterar dem. I takt med att intresset ökat för hög genomströmning i mobilnätet har också intresset för algoritmer som snabbt och tillförlitligt kan upptäcka och korrigera distorterade meddelanden ökat. Polarkoder har valts av "Third Generation Partnership Project" som den klass av felkorrigeringskoder som ska användas för 5G:s radiokontrollkanaler. Detta examensarbete hade som syfte att undersöka om polarkoden "Successive Cancellation" (SC) skulle kunna parallelliseras och om en grafisk bearbetningsenhet (GPU) kan användas för att optimera exekveringstiden för algoritmen.
SC utökades med stöd för trädbeskärning och parallellisering med hjälp av GPU:er. Skillnaden i exekveringstid mellan de parallella och sekventiella versionerna av SC-algoritmen med och utan trädbeskärning utvärderades. SC-algoritmen för trädbeskärning erbjöd nästan alltid kortare exekveringstider än SC-algoritmen som inte använde trädbeskärning. Stödet för GPU:er minskade dock inte exekveringstiden. Således kan man med dessa resultat inte med säkerhet säga att GPU-stöd skulle gynna SC-algoritmen.
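The SC decoder's recursive tree structure is what both the tree pruning and the GPU parallelization act on. A minimal software sketch of plain SC decoding (min-sum LLR approximation, natural bit order; a generic textbook illustration, not the thesis implementation, and all function names are ours):

```python
def f_node(a, b):
    # Check-node LLR combination (min-sum approximation of the f function)
    sign = 1.0 if a * b >= 0 else -1.0
    return sign * min(abs(a), abs(b))

def g_node(a, b, u):
    # Variable-node LLR combination given the earlier hard decision u
    return b + (1 - 2 * u) * a

def polar_encode(u):
    """Arikan butterfly transform in natural order: x = [enc(l) xor enc(r), enc(r)]."""
    n = len(u)
    if n == 1:
        return u[:]
    half = n // 2
    left, right = polar_encode(u[:half]), polar_encode(u[half:])
    return [l ^ r for l, r in zip(left, right)] + right

def sc_decode(llr, frozen):
    """Successive cancellation decoding; returns (u estimates, re-encoded bits)."""
    n = len(llr)
    if n == 1:
        u = 0 if (frozen[0] or llr[0] >= 0) else 1
        return [u], [u]
    half = n // 2
    left_llr = [f_node(llr[i], llr[i + half]) for i in range(half)]
    u1, x1 = sc_decode(left_llr, frozen[:half])
    right_llr = [g_node(llr[i], llr[i + half], x1[i]) for i in range(half)]
    u2, x2 = sc_decode(right_llr, frozen[half:])
    return u1 + u2, [a ^ b for a, b in zip(x1, x2)] + x2
```

The left branch must finish before the right branch can start (the g step needs the left decisions), which is precisely the sequential dependency that makes SC hard to parallelize and motivates pruning entire subtrees.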
43

Sur la résolution des équations intégrales singulières à noyau de Cauchy / On solving Cauchy singular integral equations

Mennouni, Abdelaziz 27 April 2011 (has links)
L'objectif de ce travail est la résolution des équations intégrales singulières à noyau de Cauchy. On y traite les équations singulières de Cauchy de première espèce par la méthode des approximations successives. On s'intéresse aussi aux équations intégrales à noyau de Cauchy de seconde espèce, en utilisant les polynômes trigonométriques et les techniques de Fourier. Dans la même perspective, on utilise les polynômes de Tchebychev de quatrième espèce pour résoudre une équation intégro-différentielle à noyau de Cauchy. Ensuite, on s'intéresse à une autre équation intégro-différentielle à noyau de Cauchy, en utilisant les polynômes de Legendre, ce qui a donné lieu à développer deux méthodes basées sur une suite de projections qui converge simplement vers l'identité. En outre, on exploite les méthodes de projection pour les équations intégrales avec des opérateurs intégraux bornés non compacts et on a appliqué ces méthodes à l'équation intégrale singulière à noyau de Cauchy de deuxième espèce. / The purpose of this thesis is to develop and illustrate various new methods for solving many classes of Cauchy singular integral and integro-differential equations. We study the successive approximation method for solving Cauchy singular integral equations of the first kind in the general case, then we develop a collocation method based on trigonometric polynomials combined with a regularization procedure for solving Cauchy integral equations of the second kind. In the same perspective, we use a projection method for solving operator equations with bounded noncompact operators in Hilbert spaces. We apply collocation and projection methods for solving Cauchy integro-differential equations, using airfoil and Legendre polynomials.
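The method of successive approximations mentioned above is, in its classical form, a Picard fixed-point iteration. As a hedged illustration we apply it to a second-kind Fredholm equation with a smooth (non-singular) kernel, where the iteration is a plain contraction; the Cauchy-kernel case treated in the thesis additionally requires the regularization machinery described in the abstract. The example kernel, right-hand side, and grid sizes are our own choices:

```python
def successive_approx(f, K, lam=1.0, n=200, iters=60):
    """Picard (successive-approximation) iteration for the second-kind equation
    u(x) = f(x) + lam * integral_0^1 K(x, t) u(t) dt,
    discretized with trapezoidal quadrature on a uniform grid."""
    xs = [i / n for i in range(n + 1)]
    w = [(0.5 if i in (0, n) else 1.0) / n for i in range(n + 1)]  # trapezoid weights
    u = [f(x) for x in xs]  # initial iterate u_0 = f
    for _ in range(iters):
        # u_{k+1}(x) = f(x) + lam * quadrature of K(x, .) u_k(.)
        u = [f(x) + lam * sum(wj * K(x, tj) * uj
                              for wj, tj, uj in zip(w, xs, u))
             for x in xs]
    return xs, u
```

With K(x, t) = xt and f(x) = 2x/3 the exact solution is u(x) = x, and the iteration contracts with factor 1/3 per step.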
44

The Use of Mental Imagery in Improving the Simultaneous and Successive Processing Abilities of Grade V Learners with Learning Disorders of Reading and Written Expression

Els, Karen 16 February 2007 (has links)
Student Number : 9702858G - MA research report - School of Human and Community Development - Faculty of Humanities / This study forms part of a series of studies on the use of mental imagery in learning. Preliminary data suggest that high mental imagery techniques are as effective as phonologically based techniques in the remediation of the English language abilities of learners with difficulties in reading and written expression, and may lead to greater improvements where children have previously not learned using phonic approaches to learning to read, write and spell. Preliminary data further suggest that cognitive improvements, which cannot be explained purely by maturation factors, are also apparent as a result. The primary focus of this study was to investigate the effectiveness of high mental imagery techniques in improving the simultaneous and successive processing abilities of Grade V learners with learning disorders of reading and written expression. It also aimed to explore the usefulness of mental imagery techniques in improving the English spelling, reading and writing abilities of these learners. Eight Grade V learners attending a remedial primary school were selected to participate in this study. These learners were those who, in view of their scholastic history, were considered to be ‘treatment resisters’, implying that they had progressed poorly and had not responded well to other forms of traditional remedial intervention received in improving their English language abilities. Each participant’s cognitive, spelling, reading and writing abilities were pre- and post-tested utilising various psycho-educational and cognitive psychological assessment tools, and their phonic skills were analysed. The sample received six months of bi-weekly individual remedial tuition in accordance with the remedial intervention strategy of the study group to which the participants had been randomly assigned.
Four participants were tutored via high mental imagery techniques (experimental group) and four participants were tutored utilising a phonological approach, forming the contrast group. Aggregated case study methodology was utilised to analyse the data. The results of this pilot study suggest that high mental imagery techniques are useful in improving the successive and simultaneous processing abilities and reading, spelling and writing skills of learners suffering from learning disorders of reading and written expression. It should be noted that statistical analysis of the results was not undertaken owing to the small number of participants comprising the sample. However, when results obtained were analysed on a case-by-case basis as well as through aggregated case contrasts, there were strong indications to suggest that the gains made by those participants tutored using high mental imagery techniques exceeded those of participants tutored in phonological techniques.
45

Low power SAR analog-to-digital converter for internet-of-things RF receivers / Conversor analógico-digital SAR de baixo consumo para receptores RF de internet-das-coisas

Dornelas, Helga Uchoa January 2018 (has links)
The "Internet of Things" (IoT) has been a topic of intensive research in industry, technological centers and the academic community, with data communication being an aspect of high relevance in this area. The exponential increase in devices with wireless capabilities, as well as in the number of users, alongside the decreasing costs of implementing broadband communications, created a suitable environment for IoT applications. An IoT device is typically composed of a wireless transceiver, a battery and/or energy harvesting unit, a power management unit, sensors and a conditioning unit, a microprocessor and a data storage unit. Energy supply is a limiting factor in many applications, and the transceiver usually demands a significant amount of power. In this scenario the emerging wireless communication standard IEEE 802.11ah, on which this work focuses, was proposed as an option for low power sub-GHz radio communication. A typical architecture of modern radio receivers contains the analog radio-frequency (RF) front-end, which amplifies, demodulates and filters the input signal, and also analog-to-digital converters (ADC), which translate the analog signals to the digital domain. Additionally, the successive-approximation register (SAR) ADC architecture has become popular recently due to its power efficiency, simplicity, and compatibility with scaled-down integrated CMOS technology. In this work, the RF receiver architecture and its specifications, aiming at low power consumption and compliance with the IEEE 802.11ah standard, are outlined, forming the basis for the proposal of an ADC with 8-bit resolution and a 10 MHz sampling rate. A power-efficient switching scheme for the charge-redistribution SAR ADC architecture is explored in detail, along with the circuit-level design of the digital-to-analog converter (DAC). The transistor-level design of the two remaining ADC main blocks, the sampling switch and the comparator, is also explored.
Electrical simulation of the physical layout, including parasitics, in a 130 nm CMOS process resulted in a SINAD of 47.3 dB at the receiver IF of 3 MHz and 45.5 dB at the Nyquist rate, while consuming 21 µW from a 1 V supply. The resulting SAR ADC figure of merit (FoM) corresponded to 11.1 fJ/conv-step at the IF and 13.7 fJ/conv-step at the Nyquist rate.
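The quoted figures of merit can be cross-checked from the SINAD and sample rate, assuming the conventional Walden FoM, FoM = P / (2**ENOB * fs) with ENOB = (SINAD - 1.76) / 6.02, and taking the supply power as 21 µW:

```python
def enob(sinad_db):
    # Effective number of bits implied by a SINAD measurement (dB)
    return (sinad_db - 1.76) / 6.02

def walden_fom(power_w, sinad_db, fs_hz):
    # Walden figure of merit: P / (2**ENOB * fs), in joules per conversion step
    return power_w / (2 ** enob(sinad_db) * fs_hz)

fom_if = walden_fom(21e-6, 47.3, 10e6)       # ~11.1 fJ/conv-step at the 3 MHz IF
fom_nyquist = walden_fom(21e-6, 45.5, 10e6)  # ~13.7 fJ/conv-step at Nyquist
```

Both values land on the reported 11.1 and 13.7 fJ/conv-step, which supports reading the (extraction-damaged) power figure as 21 µW.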
46

Estudo da operação otimizada aplicada a um sistema de reservatórios destinado à geração de energia elétrica / Optimized operation study applied to a hydropower reservoir system

Nascimento, Luiz Sérgio Vasconcelos do 28 April 2006 (has links)
Uma das aplicações mais importantes da análise de sistemas no planejamento de recursos hídricos diz respeito à determinação de estratégias operacionais de sistemas de múltiplos reservatórios, elementos indispensáveis aos aproveitamentos hídricos, cuja operação é alvo de análises que podem envolver muitas restrições e variáveis de decisão. Fica evidenciada, portanto, a necessidade de a operação destes ser otimizada, propiciando assim, o seu melhor aproveitamento, com o menor custo para a sociedade. A presente pesquisa estuda a operação otimizada de um sistema de reservatórios destinado a geração de energia elétrica, usando um modelo híbrido composto de algoritmos genéticos e o SIMPLEX de Nelder e Mead acoplado à programação linear sucessiva. Em conformidade com a recente proposta de Reis et al. (2005), o problema de otimização é resolvido através da decomposição em subproblemas seqüenciais independentes relativos a cada estágio de operação, conectados entre si por supor que os volumes dos reservatórios no final de cada estágio correspondam ao estado do sistema no início do estágio subseqüente. Para estimular a utilização mais eficiente dos volumes armazenados, no suprimento das demandas hídricas dos estágios futuros, são aplicados fatores de redução de custo (FRCs) sobre os volumes armazenados remanescentes no final de cada estágio / One of the most important applications of systems analysis in water resources planning concerns the determination of operational strategies for multiple-reservoir systems, elements fundamental to water resources development, whose operation is the subject of analyses that may involve many constraints and decision variables. The need to optimize their operation is therefore evident, so as to achieve the best use of these systems at the lowest cost to society. This research studies the optimized operation of a reservoir system whose main objective is hydropower generation.
The optimization framework employs a hybrid model combining genetic algorithms and the Nelder and Mead SIMPLEX coupled with successive linear programming. Following the recent proposal of Reis et al. (2005), the optimization problem is solved by decomposition into independent sequential subproblems, one for each stage of operation, connected by assuming that the reservoir storages at the end of each stage correspond to the system state at the beginning of the subsequent stage. To promote more efficient use of the stored volumes in supplying the water demands of future stages, cost reduction factors (FRCs) are applied to the storage remaining at the end of each stage.
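The stage-by-stage decomposition with cost reduction factors can be sketched on a toy single-reservoir model. Everything here is an illustrative assumption of ours (the square-root benefit curve, grid search in place of the hybrid GA/SIMPLEX/SLP solver, all numbers), not the thesis model; it only shows how each stage's subproblem values both its own release and the FRC-weighted storage it hands to the next stage:

```python
import math

def operate(inflows, s0=0.0, cap=10.0, frc=0.05, grid=200):
    """Stage-by-stage optimization: each stage independently maximizes
    sqrt(release) + frc * carry-over storage, and the end-of-stage storage
    becomes the initial state of the subsequent stage."""
    s, releases = s0, []
    for q in inflows:
        avail = s + q                       # water available this stage
        best_r, best_val = 0.0, -1.0
        for k in range(grid + 1):           # grid search stands in for the solver
            r = avail * k / grid
            carry = min(avail - r, cap)     # storage passed on; excess spills
            val = math.sqrt(r) + frc * carry
            if val > best_val:
                best_val, best_r = val, r
        s = min(avail - best_r, cap)
        releases.append(best_r)
    return releases, s
```

With a small FRC the marginal value of carry-over storage is below the marginal hydropower benefit, so each stage releases all available water; raising `frc` shifts water toward future stages.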
47

Recursive Methods in Number Theory, Combinatorial Graph Theory, and Probability

Burns, Jonathan 07 July 2014 (has links)
Recursion is a fundamental tool of mathematics used to define, construct, and analyze mathematical objects. This work employs induction, sieving, inversion, and other recursive methods to solve a variety of problems in the areas of algebraic number theory, topological and combinatorial graph theory, and analytic probability and statistics. A common theme of recursively defined functions, weighted sums, and cross-referencing sequences arises in all three contexts, and is supplemented by sieving methods, generating functions, asymptotics, and heuristic algorithms. In the area of number theory, this work generalizes the sieve of Eratosthenes to a sequence of polynomial values, a method called polynomial-value sieving. In the case of quadratics, the method of polynomial-value sieving may be characterized briefly as a product presentation of two binary quadratic forms. Polynomials for which the polynomial-value sieving yields all possible integer factorizations of the polynomial values are called recursively-factorable. The Euler and Legendre prime-producing polynomials of the form n² + n + p and 2n² + p, respectively, and Landau's n² + 1 are shown to be recursively-factorable. Integer factorizations realized by the polynomial-value sieving method, applied to quadratic functions, are in direct correspondence with the lattice point solutions (X, Y) of the conic sections aX² + bXY + cY² + X - nY = 0. The factorization structure of the underlying quadratic polynomial is shown to have geometric properties in the space of the associated lattice point solutions of these conic sections. In the area of combinatorial graph theory, this work considers two topological structures that are used to model the process of homologous genetic recombination: assembly graphs and chord diagrams. The result of a homologous recombination can be recorded as a sequence of signed permutations called a micronuclear arrangement.
In the assembly graph model, each micronuclear arrangement corresponds to a directed Hamiltonian polygonal path within a directed assembly graph. Starting from a given assembly graph, we construct all the associated micronuclear arrangements. Another way of modeling genetic rearrangement is to represent precursor and product genes as a sequence of blocks which form arcs of a circle. Associating matching blocks in the precursor and product gene with chords produces a chord diagram. The braid index of a chord diagram can be used to measure the scope of interaction between the crossings of the chords. We augment the brute force algorithm for computing the braid index to utilize a divide and conquer strategy. Both assembly graphs and chord diagrams are closely associated with double occurrence words, so we classify and enumerate the double occurrence words based on several notions of irreducibility. In the area of analytic probability, moments abstractly describe the shape of a probability distribution. Over the years, numerous varieties of moments such as central moments, factorial moments, and cumulants have been developed to assist in statistical analysis. We use inversion formulas to compute high order moments of various types for common probability distributions, and show how the successive ratios of moments can be used for distribution and parameter fitting. We consider examples for both simulated binomial data and the probability distribution affiliated with the braid index counting sequence. Finally we consider a sequence of multiparameter binomial sums which shares similar properties with the moment sequences generated by the binomial and beta-binomial distributions. This sequence of sums behaves asymptotically like the high order moments of the beta distribution, and has completely monotonic properties.
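For context, the classical sieve of Eratosthenes that the polynomial-value sieving generalizes can be applied to Euler's prime-producing polynomial n² + n + 41 (the p = 41 case of the family n² + n + p quoted above). This is a generic illustration of the starting point, not the thesis's recursively-factorable machinery:

```python
def eratosthenes(limit):
    """Classical sieve of Eratosthenes: primality flags for 0..limit."""
    is_prime = [True] * (limit + 1)
    is_prime[0] = is_prime[1] = False
    for p in range(2, int(limit ** 0.5) + 1):
        if is_prime[p]:
            # Cross out every multiple of p starting at p*p
            for multiple in range(p * p, limit + 1, p):
                is_prime[multiple] = False
    return is_prime

values = [n * n + n + 41 for n in range(40)]  # Euler's polynomial, n = 0..39
flags = eratosthenes(max(values))
```

All forty values are prime, the classical fact that makes Euler's polynomial notable.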
48

Design and Evaluation of an Ultra-Low Power Successive Approximation ADC

Zhang, Dai January 2009 (has links)
Analog-to-digital converters (ADC) targeted for use in medical implant devices serve an important role as the interface between analog signals and the digital processing system. Usually, low power consumption is required for a long battery lifetime. In such applications, which require low power consumption and moderate speed and resolution, one of the most prevalently used ADC architectures is the successive approximation register (SAR) ADC. This thesis presents a design of an ultra-low power 9-bit SAR ADC in 0.13 µm CMOS technology. Based on a literature review of SAR ADC design, the proposed SAR ADC combines a capacitive DAC with the S/H circuit, uses a binary-weighted capacitor array for the DAC and utilizes a dynamic latch comparator. Evaluation results show that at a supply voltage of 1.2 V and an output rate of 1 kS/s, the SAR ADC has a total power consumption of 103 nW and a signal-to-noise-and-distortion ratio of 54.4 dB. Proper performance is achieved down to a supply voltage of 0.45 V, with a power consumption of 16 nW.
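The successive-approximation register itself performs a bit-per-cycle binary search: the comparator decides, MSB first, whether each trial bit is kept. An idealized behavioral model (ignoring the capacitive DAC, switching energy, and comparator non-idealities that the thesis designs; the 9-bit, 1.2 V parameters echo the abstract):

```python
def sar_adc(vin, vref=1.2, bits=9):
    """Ideal SAR ADC model: binary search settling one bit per comparison."""
    code = 0
    for b in reversed(range(bits)):
        trial = code | (1 << b)
        # Comparator: keep the trial bit if the DAC output does not exceed vin
        if trial * vref / (1 << bits) <= vin:
            code = trial
    return code
```

For an in-range input this converges in exactly `bits` comparisons to the largest code whose DAC voltage does not exceed the input.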
49

Solution and melt behaviour of high-density polyethylene - Successive Solution Fractionation mechanism - Influence of the molecular structure on the flow

Stephenne, Vincent 26 August 2003 (has links)
In the field of polyethylene characterization, one of the most challenging research topics is certainly the accurate molecular structure determination of industrial products, in terms of molar mass distribution (MMD), corresponding average molar masses and molecular architecture (branching nature, content and heterogeneity). The solution to this long-term problem necessarily calls for a multi-disciplinary approach. Therefore, the respective advantages of molecular structure characterization in solution and in the melt are exploited. In solution, chromatographic and spectroscopic methods allow determination of the MMD, average branching content and intermolecular heterogeneity within their detection limits. Rheological testing in the melt could be a very powerful molecular structure investigation tool, due to its extreme sensitivity to high molar mass (MM) tailing or traces of long chain branching (LCB). But when the rheological test results are in hand, we often still wonder what kind of molecular structure gives rise to such results. Indeed, the melt signal depends on MM, MMD and LCB presence. MMD determination and LCB quantification by the melt approach are impossible as long as the respective effects of these molecular parameters are not clearly quantified. The general purpose of the present work is to contribute to a better molecular structure characterization of high-density polyethylene by first developing a preparative fractionation method able to provide narrow-disperse linear and long chain branched samples, essential to separate the concomitant effects of MM, MMD and LCB on rheological behaviour. Once such model fractions are isolated, the influence of MM and LCB on both shear and elongational flow behaviour in the melt is studied.
/ Dans le domaine du polyéthylène, un des sujets de recherche les plus investigués à l'heure actuelle est la détermination précise de la structure moléculaire de résines industrielles, en termes de distribution des masses molaires (MMD), de masses molaires moyennes correspondantes et d'architecture moléculaire (nature, teneur et hétérogénéité). La résolution de cette problématique nécessite une approche multi-disciplinaire, afin d'exploiter simultanément les avantages d'une caractérisation en solution et à l'état fondu. En solution, certaines méthodes chromatographiques et spectroscopiques permettent de déterminer une MMD, une teneur moyenne en branchement et leur distribution, dans leurs limites de détection. La mesure du comportement rhéologique à l'état fondu pourrait s'avérer un formidable outil de caractérisation de la structure moléculaire en raison de son extrême sensibilité à certains détails moléculaires, tels que la présence de traces de LCB ou de très hautes masses molaires (MM). Malheureusement, le signal rhéologique dépend de manière conjointe de la MM, de la MMD et de la présence ou non de LCB, de telle sorte que la détermination d'une MMD ou d'une teneur en LCB par cette voie est impossible aussi longtemps que les effets respectifs de ces paramètres moléculaires sur le comportement rhéologique n'ont pas été clairement et distinctement établis. L'objectif global de cette thèse est de contribuer à une meilleure caractérisation de la structure moléculaire du polyéthylène haute densité en développant, dans un premier temps, une méthode préparative de fractionnement capable de produire des échantillons, linéaires ou branchés, à MMD la plus étroite possible, indispensables en vue de séparer les effets concomitants de la MM, de la MMD et du LCB sur le comportement rhéologique à l'état fondu. Une fois de tels objets modèles isolés, l'influence de la MM et du LCB sur le comportement rhéologique, en cisaillement et en élongation, sera étudiée.
50

Logical Superposition Coded Modulation for Wireless Video Multicasting

Ho, James Ching-Chih January 2009 (has links)
This thesis documents the design of logical superposition coded (SPC) modulation for implementation in wireless video multicast systems, to tackle the issues caused by multi-user channel diversity, a long-standing problem inherent in wireless video multicasting. The framework generates a logical SPC modulated signal by mapping successively refinable information bits into a single signal constellation, with modifications in the MAC-layer software. The transmitted logical SPC signals not only mimic SPC signals generated by the superposition of multiple modulated signals in conventional hardware-based SPC modulation, but also yield comparable performance gains when provided with knowledge of the information bit dependencies and receiver channel distributions. At the receiving end, the proposed approach requires only simple modifications in the MAC-layer software, and demonstrates full decoding compatibility with the conventional multi-stage successive interference cancellation (SIC) approach involving additional hardware devices. Generalized formulations for the symbol error rate (SER) are derived for performance evaluations and comparisons with the conventional hardware-based approach.
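The core idea, mapping a coarse base layer and a fine refinement layer into one constellation and peeling them off in stages at the receiver, can be sketched as a generic two-layer QPSK superposition with power split `alpha`. This is an illustration of conventional SPC with SIC demodulation, not the thesis's logical MAC-layer mapping, and the names and power split are our own:

```python
def spc_symbol(base_bits, refine_bits, alpha=0.8):
    """Superpose a coarse base-layer QPSK symbol and a fine refinement-layer
    QPSK symbol into one constellation point; alpha is the base-layer power share."""
    def qpsk(b0, b1):
        # Gray-mapped unit-energy QPSK: bit 0 -> +1, bit 1 -> -1 on each axis
        return complex(1 - 2 * b0, 1 - 2 * b1) / 2 ** 0.5
    return alpha ** 0.5 * qpsk(*base_bits) + (1 - alpha) ** 0.5 * qpsk(*refine_bits)

def sic_demod(y, alpha=0.8):
    """Two-stage demodulation: decide the base layer, cancel it, then decide
    the refinement layer (successive interference cancellation)."""
    base = (int(y.real < 0), int(y.imag < 0))
    cancelled = y - alpha ** 0.5 * complex(1 - 2 * base[0], 1 - 2 * base[1]) / 2 ** 0.5
    refine = (int(cancelled.real < 0), int(cancelled.imag < 0))
    return base, refine
```

With `alpha` above one half the base layer dominates each axis, so a far receiver can decode the base layer alone while a near receiver recovers both, which is the multicast benefit SPC targets.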
