361

Characteristics of the log periodic dipole array

Onwuegbuna, Leonard Ikemefuna 28 February 2007 (has links)
Student Number : 9713144D - MSc Dissertation - School of Electrical Engineering - Faculty of Engineering and the Built Environment / The performance of the log-periodic dipole array (LPDA) antenna has been characterized in the form of parametric curves available in most antenna design handbooks and other relevant literature. These characteristic curves are often limited in scope; for instance, they do not include curves relating the boom-length 'L' to the number of dipole elements 'N' for a given bandwidth, even though these two parameters are the main cost determinants of an LPDA antenna. The concept of convergence is introduced to aid cost optimization of the LPDA antenna in terms of the number of dipole elements 'N'. Although 'N' is used as the minimization criterion, the criteria for establishing convergence encompass all the main electrical characteristics of the LPDA antenna, such as VSWR, gain and radiation patterns. Lastly, the effects of boom impedance 'Zo' and length-to-diameter ratio 'Ln/Dn' on the performance characteristics of the LPDA antenna were investigated, with a view to determining whether neglecting these two parameters was responsible for the disparity between the directive gain values obtained by R. L. Carrel and those obtained by later researchers. The investigation indicates that if an LPDA antenna is converged, the effects of Zo and the Ln/Dn ratio, though significant, cannot alone account for the fairly large disparity in gain values. To perform these investigations, a modern numerical modeling tool based on the method of moments, the Super Numerical Electromagnetic Code version 2, was utilized. The modeling tool was first validated by the agreement between measured values and its predictions. Next, the performance of LPDA antennas was simulated under variations of their number of elements.
Thereafter, the means and standard deviations of the gain were extracted from the simulated numerical models. Trends in the variation of these means and standard deviations are used as the basis for deciding the number of elements at which the antenna yields acceptable performance (the convergence criterion). These are presented as convergence curves, which give, for any boom-length and operating bandwidth, the minimum number of elements required for acceptable performance. Finally, the effects of length-to-diameter ratio and boom impedance on the gain of optimized LPDA antennas are presented as parametric curves.
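The convergence test described above (track the mean and standard deviation of the simulated gain as elements are added, and stop once both settle) can be sketched as follows. The tolerances and gain figures are illustrative placeholders, not values from the dissertation:

```python
from statistics import mean, stdev

def converged_n(gain_curves, mean_tol=0.1, std_tol=0.25):
    """Smallest element count N at which the band-averaged gain
    stops moving (change < mean_tol dB) and its spread across the
    band stays below std_tol dB.  gain_curves maps N -> list of
    gain samples (dBi) across the operating band."""
    prev_mean = None
    for n in sorted(gain_curves):
        g = gain_curves[n]
        m, s = mean(g), stdev(g)
        if prev_mean is not None and abs(m - prev_mean) < mean_tol and s < std_tol:
            return n
        prev_mean = m
    return None  # never converged within the simulated range

# Hypothetical gain sweeps: adding elements beyond N=12 barely helps.
gains = {8: [6.0, 6.4, 5.6], 10: [6.8, 7.2, 7.0],
         12: [7.3, 7.5, 7.4], 14: [7.35, 7.55, 7.45]}
```

Here `converged_n(gains)` would report N=14 as the first element count at which both criteria hold, mirroring how the convergence curves pick the minimum N per boom-length and bandwidth.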
362

Hybrid metaheuristic algorithms for sum coloring and bandwidth coloring

Jin, Yan 29 May 2015 (has links)
The minimum sum coloring problem (MSCP) and the bandwidth coloring problem (BCP) are two important generalizations of the classical vertex coloring problem, with numerous applications in diverse domains including VLSI design, scheduling, resource allocation and frequency assignment in mobile networks. Since the MSCP and BCP are NP-hard, heuristics and metaheuristics are practical methods for obtaining high-quality solutions in acceptable computing time. This thesis is dedicated to developing effective hybrid metaheuristic algorithms for the MSCP and BCP. For the MSCP, we present two memetic algorithms which combine population-based evolutionary search and local search.
An effective algorithm for the maximum independent set problem is devised for generating initial solutions. For the BCP, we propose a learning-based hybrid search algorithm which follows a cooperative framework between an informed construction procedure and a local search heuristic. The proposed algorithms are evaluated on well-known benchmark instances and show highly competitive performance compared to the current state-of-the-art algorithms from the literature. Furthermore, the key components of these algorithms are investigated and analyzed.
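For concreteness, the MSCP objective minimized here is the sum of the integer colors assigned to the vertices. A simple greedy baseline (not the memetic algorithm itself) looks like this:

```python
def greedy_sum_coloring(adj):
    """Color vertices highest-degree first with the smallest legal
    color (colors start at 1) and return the coloring plus the MSCP
    objective: the sum of colors over all vertices."""
    color = {}
    for v in sorted(adj, key=lambda u: -len(adj[u])):
        used = {color[u] for u in adj[v] if u in color}
        c = 1
        while c in used:
            c += 1
        color[v] = c
    return color, sum(color.values())

# Triangle a-b-c with a pendant vertex d attached to a.
graph = {'a': ['b', 'c', 'd'], 'b': ['a', 'c'],
         'c': ['a', 'b'], 'd': ['a']}
```

On this graph the greedy pass yields a legal coloring with objective 8; the memetic algorithms in the thesis improve on such baselines via evolutionary recombination and local search.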
363

Dynamic bandwidth allocation for EPON networks

Carrasco Arbieto, Carmen Orencia 10 August 2007 (has links)
Telecommunication networks are divided into core, metropolitan and access networks. Core and metropolitan networks benefit from the high bandwidth capacity of optical fiber, while access networks suffer a bandwidth bottleneck because of the use of twisted-pair wires and coaxial cable. To solve this problem and offer users broadband access at low cost, passive optical networks (PONs) have been proposed. A PON is formed by two basic elements: the optical network unit (ONU), positioned close to the customers, and the optical line terminal (OLT), located close to the service provider. Among the available standards for PON networks, Ethernet PON (EPON), standardised by the IEEE 802.3ah group, is an attractive option because Ethernet is already widely used in local networks. The multipoint control protocol (MPCP), already specified, is responsible for medium access control, providing the signaling infrastructure for transmission between the OLT and ONUs. However, the bandwidth allocation algorithm that performs medium access control on top of MPCP was considered outside the scope of the working group, leaving it to be developed by equipment vendors. In this work, EPON architectures and the MPCP protocol are described, and bandwidth allocation algorithms are evaluated through computational simulation. Bandwidth allocation algorithms that integrate statistical multiplexing and techniques to support differentiated classes of service, based on the time division multiplexing (TDM) scheme, are investigated. Algorithms that integrate wavelength division multiplexing (WDM) into the EPON TDM architecture are also investigated. The WDM-TDM algorithms permit the progressive upgrade of a TDM-based EPON to WDM schemes.
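A minimal sketch of the kind of TDM grant-sizing logic such algorithms build on: limited-service grants capped at a maximum window, with the quota freed by lightly loaded ONUs shared among the overloaded ones. The cap and byte counts below are illustrative; the dissertation evaluates more elaborate schemes with service classes:

```python
def dba_grants(requests, w_max):
    """One polling cycle: grant each ONU its reported queue size,
    capped at w_max bytes; then split the unused quota of lightly
    loaded ONUs equally among those still above the cap."""
    grants = [min(r, w_max) for r in requests]
    excess = sum(w_max - r for r in requests if r < w_max)
    heavy = [i for i, r in enumerate(requests) if r > w_max]
    share = excess // len(heavy) if heavy else 0
    for i in heavy:
        grants[i] = min(requests[i], grants[i] + share)
    return grants
```

For example, with reports of 500, 2000 and 1500 bytes and a 1000-byte cap, the light ONU's unused 500 bytes are split between the two backlogged ONUs.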
364

Ultra-large-bandwidth ultrasonic viscometry

Mograne, Mohamed Abderrahmane 22 November 2018 (has links)
The main goal of this thesis is to instrument a container familiar in biomedical and chemistry settings (a test tube) with piezoelectric elements emitting longitudinal (L) waves, and to implement and optimize various ultrasonic methods to measure viscosities quickly, without changing the measurement bench, from a few Hz to several tens of megahertz around room temperature. With the system in place it is possible to determine in a few minutes the rheological behavior of the liquid under study by measuring its shear viscosity. Furthermore, the viscosity range reached is extremely wide: measurements are possible from a few tens of mPa.s to several hundred Pa.s. Finally, beyond quantitative viscosity results, the measurement bench can also be used to qualitatively monitor reaction kinetics (polymerization, for example).
365

Dynamic bandwidth allocation algorithms for an RF on-chip interconnect

Unlu, Eren 21 June 2016 (has links)
With a rapidly increasing number of cores on a single chip, scalability problems have arisen due to congestion and latency in conventional interconnects. To address these issues, the WiNoCoD project (Wired RF Network-on-Chip Reconfigurable-on-Demand) was initiated with the support of the French National Research Agency (ANR). This thesis contributes to the WiNoCoD project. A dedicated RF controller structure is proposed for WiNoCoD's OFDMA-based wired RF interconnect. Based on this architecture, effective bandwidth allocation algorithms, both distributed and centralized, are presented, addressing the very specific requirements and constraints of the on-chip environment. An innovative subcarrier allocation protocol for the bimodal packet lengths of cache-coherency traffic, requiring no additional signaling, is introduced and shown to decrease average latency significantly. In addition, effective modulation order selection policies for this interconnect are introduced, seeking the optimal delay-power trade-off.
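As a toy illustration of OFDMA bandwidth allocation on such an interconnect (not WiNoCoD's actual protocol), a proportional-share split of subcarriers with a one-subcarrier floor per tile might look like:

```python
def allocate_subcarriers(demands, total_sc):
    """Split total_sc subcarriers across tiles in proportion to
    their demand, guaranteeing each tile at least one subcarrier
    and handing rounding leftovers to the heaviest demanders.
    Assumes total_sc is at least the number of tiles."""
    total = sum(demands) or 1
    alloc = [max(1, d * total_sc // total) for d in demands]
    by_demand = sorted(range(len(demands)), key=lambda i: -demands[i])
    i = 0
    while sum(alloc) < total_sc:   # distribute the rounding remainder
        alloc[by_demand[i % len(by_demand)]] += 1
        i += 1
    return alloc
```

A centralized arbiter could run this each allocation epoch; the distributed variants in the thesis instead let tiles infer their shares without extra signaling.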
366

Energy efficient wired networking

Chen, Xin January 2015 (has links)
This research proposes a new dynamic energy management framework for a backbone Internet Protocol over Dense Wavelength Division Multiplexing (IP over DWDM) network. Maintaining the logical IP-layer topology is a key constraint of our architecture while saving energy through infrastructure sleeping and virtual router migration. The traffic demand in a Tier 2/3 network typically has a regular diurnal pattern based on people's activities: high during working hours and much lighter during the hours associated with sleep. When the traffic demand is light, virtual router instances can be consolidated onto a smaller set of physical platforms and the unneeded platforms put to sleep to save energy. As traffic demand increases, the sleeping platforms can be re-awoken to host virtual router instances and so maintain quality of service. Since the IP-layer topology remains unchanged throughout virtual router migration in our framework, there is no network disruption or discontinuity when physical platforms enter or leave hibernation. However, migration places extra demands on the optical layer, as additional connections are needed to preserve the logical IP-layer topology while forwarding traffic to the new virtual router location. Consequently, dynamic optical connection management is needed for the new framework. Two important issues are considered: when to trigger virtual router migration, and where to move virtual router instances. For the first, a reactive mechanism triggers migration by monitoring the network state. For the second, a new evolutionary algorithm called VRM_MOEA is proposed for the destination platform selection problem, choosing the appropriate location of virtual router instances as traffic demand varies.
A novel hybrid simulation platform is developed to measure the performance of the new framework, capturing the functionality of the optical layer, the IP-layer data path and the IP/optical control plane. Simulation results show that the energy saving depends on many factors, such as network topology, quiet and busy thresholds, and traffic load; however, savings of around 30% are possible with typical medium-sized network topologies.
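The reactive trigger with its quiet and busy thresholds can be sketched as a small hysteresis rule; the threshold values below are placeholders, not the tuned settings from the thesis:

```python
def migration_action(utilization, state, quiet=0.3, busy=0.7):
    """Decide what the reactive mechanism should do given the
    monitored network utilization (0..1) and the current placement
    state ('spread' across all platforms or 'consolidated' onto a
    few).  The gap between quiet and busy provides hysteresis so
    the framework does not flap between the two states."""
    if state == "spread" and utilization < quiet:
        return "consolidate"        # migrate VRs together, sleep the rest
    if state == "consolidated" and utilization > busy:
        return "wake_and_spread"    # wake platforms, migrate VRs back
    return "hold"
```

The consolidate/wake decisions would then feed the VRM_MOEA placement step, which picks the destination platforms.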
367

Managing the memory hierarchy in GPUs

Dublish, Saumay Kumar January 2018 (has links)
Pervasive use of GPUs across multiple disciplines is a result of continuous adaptation of the GPU architectures to address the needs of upcoming application domains. One such vital improvement is the introduction of the on-chip cache hierarchy, used primarily to filter the high bandwidth demand to the off-chip memory. However, in contrast to traditional CPUs, the cache hierarchy in GPUs is presented with significantly different challenges such as cache thrashing and bandwidth bottlenecks, arising due to small caches and high levels of memory traffic. These challenges lead to severe congestion across the memory hierarchy, resulting in high memory access latencies. In memory-intensive applications, such high memory access latencies often get exposed and can no longer be hidden through multithreading, and therefore adversely impact system performance. In this thesis, we address the inefficiencies across the memory hierarchy in GPUs that lead to such high levels of congestion. We identify three major factors contributing to poor memory system performance: first, disproportionate and insufficient bandwidth resources in the cache hierarchy; second, poor cache management policies; and third, high levels of multithreading. In order to revitalize the memory hierarchy by addressing the above limitations, we propose a three-pronged approach. First, we characterize the bandwidth bottlenecks present across the memory hierarchy in GPUs and identify the architectural parameters that are most critical in alleviating congestion. Subsequently, we explore the architectural design space to mitigate the bandwidth bottlenecks in a cost-effective manner. Second, we identify significant inter-core reuse in GPUs, presenting an opportunity to reuse data among the L1s. We exploit this reuse by connecting the L1 caches with a lightweight ring network to facilitate inter-core communication of shared data. 
We show that this technique reduces traffic to the L2 cache, freeing up the bandwidth for other accesses. Third, we present Poise, a machine learning approach to mitigate cache thrashing and bandwidth bottlenecks by altering the levels of multithreading. Poise comprises a supervised learning model that is trained offline on a set of profiled kernels to make good warp scheduling decisions. Subsequently, a hardware inference engine is used to predict good warp scheduling decisions at runtime using the model learned during training. In summary, we address the problem of bandwidth bottlenecks across the memory hierarchy in GPUs by exploring how to best scale, supplement and utilize the existing bandwidth resources. These techniques provide an effective and comprehensive methodology to mitigate the bandwidth bottlenecks in the GPU memory hierarchy.
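The inter-core reuse idea (probe the other L1s over the ring before going to L2) can be sketched as follows; the data layout is a toy model, not the thesis's hardware design:

```python
def ring_lookup(line_addr, requester, l1_lines, n_cores):
    """Walk the ring of L1 caches starting at the requester's
    neighbour; return (core, hops) on a remote-L1 hit, or
    (None, hops_walked) when the line must come from L2."""
    for hop in range(1, n_cores):
        core = (requester + hop) % n_cores
        if line_addr in l1_lines[core]:
            return core, hop
    return None, n_cores - 1

# Four cores; core 2 holds line 0x20, nobody holds 0x30.
l1 = [{0x10}, set(), {0x20}, set()]
```

Every remote-L1 hit found this way is a request that never reaches L2, which is exactly the bandwidth-filtering effect described above.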
368

Configurable memory hierarchy for energy reduction in the HEVC video encoder

Martins, Anderson da Silva 29 September 2017 (has links)
Recent data show a growing demand for video applications on mobile devices, a major challenge for research into high-performance video encoder architectures such as the HEVC standard. In an embedded system, power consumption and performance are directly tied to the memory system. The video encoder is no different, and in HEVC the motion estimation (ME) step is known to be responsible for most of the processing time and memory accesses. This work therefore presents a design-space exploration to define energy-efficient cache memory configurations for the ME process, and proposes a configurable cache memory hierarchy, considering different video sequences and HEVC encoder configurations. The evaluation considered the widely used TZ Search algorithm, 23 video sequences with distinct resolutions, and four Quantization Parameters (QPs) under 32 different cache configurations. A cache simulator was developed and the CACTI tool was used to obtain timing and energy parameters. It was thus possible to identify optimal cache configurations for each scenario, since no single cache configuration satisfies all scenarios at once when the goal is energy reduction. Considering the optimal cache configuration for each scenario, cache usage can reduce external memory bandwidth by up to 97.37%, corresponding in one case to a reduction from 25.48 GB/s to 548.53 MB/s. The energy reduction reaches 93.95%, corresponding to a drop from 5.02 mJ to 0.30 mJ when comparing different cache configurations. These results made it possible to propose a configurable cache memory hierarchy for the motion estimation process that efficiently satisfies all scenarios tested. For the proposed configurable architecture, energy savings of up to 78.09% were found when the optimal configurations were compared to the worst case within the configurable cache (16KB-8), and savings of up to 86.91% when compared to Level-C. In addition, the external memory bandwidth savings ranged from 90.21% to 96.84%, with an average of 94.97%.
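The core of such a design-space exploration (replaying an ME address trace against each candidate cache configuration and counting hits) reduces to something like this direct-mapped toy; the real study used set-associative configurations and CACTI for the energy numbers:

```python
def cache_hits(trace, num_sets, line_bytes):
    """Replay a byte-address trace through a direct-mapped cache
    with num_sets sets of line_bytes each; return the hit count."""
    tags = [None] * num_sets
    hits = 0
    for addr in trace:
        block = addr // line_bytes
        s = block % num_sets
        if tags[s] == block:
            hits += 1
        else:
            tags[s] = block   # miss: fetch the line, evict the old tag
    return hits
```

Sweeping `num_sets` and `line_bytes` over a trace per video sequence and QP, and feeding the resulting miss counts into an energy model, is how per-scenario optimal configurations like those above can be identified.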
369

Comprehensive Wide Bandwidth Test Battery of Auditory Function in Veterans

Schairer, Kim S., Feeney, M. Patrick, Keefe, D. H., Fitzpatrick, D., Putterman, D., Kolberg, Elizabeth 22 February 2016 (has links)
No description available.
370

A Comparison of Gain for Adults from Generic Hearing Aid Prescriptive Methods: Impacts on Predicted Loudness, Frequency Bandwidth, and Speech Intelligibility

Johnson, Earl E., Dillon, Harvey 01 July 2011 (has links)
Background: Prescriptive methods have been at the core of modern hearing aid fittings for the past several decades. Every decade or so, there have been revisions to existing methods and/or the emergence of new methods that become widely used. In 2001 Byrne et al provided a comparison of insertion gain for generic prescriptive methods available at that time. Purpose: The purpose of this article was to compare National Acoustic Laboratories—Non-linear 1 (NAL-NL1), National Acoustic Laboratories—Non-linear 2 (NAL-NL2), Desired Sensation Level Multistage Input/Output (DSL m[i/o]), and Cambridge Method for Loudness Equalization 2—High-Frequency (CAMEQ2-HF) prescriptive methods for adults on the amplification characteristics of prescribed insertion gain and compression ratio. Following the differences observed in prescribed insertion gain among the four prescriptive methods, analyses of predicted specific loudness, overall loudness, and bandwidth of cochlear excitation and effective audibility as well as speech intelligibility of the international long-term average speech spectrum (ILTASS) at an average conversational input level were completed. These analyses allow for the discussion of similarities and differences among the present-day prescriptive methods. Research Design: The impact of insertion gain differences among the methods is examined for seven hypothetical hearing loss configurations using models of loudness perception and speech intelligibility. Study Sample: Hearing loss configurations for adults of various types and degrees were selected, five of which represent sensorineural impairment and were used by Byrne et al; the other two hearing losses provide an example of mixed and conductive impairment. Data Collection and Analysis: Prescribed insertion gain data were calculated in 1/3-octave frequency bands for each of the seven hearing losses from the software application of each prescriptive method over multiple input levels. 
The insertion gain data, along with a diffuse-field-to-eardrum transfer function, were used to calculate output levels at the eardrums of the hypothetical listeners. Levels of hearing loss and output were then used in the Moore and Glasberg loudness model and the ANSI S3.5-1997 Speech Intelligibility Index model. Results: NAL-NL2 and DSL m[i/o] provided comparable overall loudness of approximately 8 sones for the five sensorineural hearing losses for a 65 dB SPL ILTASS input. This loudness was notably less than that perceived by a normal-hearing person for the same input signal, 18.6 sones. NAL-NL2 and DSL m[i/o] also provided comparable predicted speech intelligibility in quiet and noise. CAMEQ2-HF provided a greater average loudness, similar to NAL-NL1, with more high-frequency bandwidth but no significant improvement to predicted speech intelligibility. Conclusions: Definite variation in prescribed insertion gain was present among the prescriptive methods. These differences, when averaged across the hearing losses, were by and large negligible with regard to predicted speech intelligibility at normal conversational speech levels. With regard to loudness, DSL m[i/o] and NAL-NL2 prescribed the least overall loudness, followed by CAMEQ2-HF, with NAL-NL1 prescribing the most. CAMEQ2-HF provided the most audibility at high frequencies; even so, the audibility became less effective for improving speech intelligibility as hearing loss severity increased.
