81

Designing for Awareness and Accountability with Tangible Computing

Dahlström, Mathias, Heinstedt, Elin January 2002 (has links)
This project has been devoted to designing a computer system with a tangible user interface, in the context of future supervision of remote drop-in dialysis patients. The tangible computer system was developed as an example of how two concepts in human work, accountability and awareness, can be supported through tangible user interfaces. A current trend within CSCW discusses accountability in design in terms of how software should make its own actions accountable. We chose an alternative route, namely to use the tangible interface to make nurses' and patients' actions explicit to each other. Explicating actions is a key benefit of a tangible interface in work environments that are physically co-located. We conclude that our strategy merits further investigation in settings where the work is carried out in a physically co-located space.
82

An Introduction and Evaluation of a Lossless Fuzzy Binary AND/OR Compressor

Alipour, Philip Baback, Ali, Muhammad January 2010 (has links)
We report a new lossless data compression (LDC) algorithm for producing predictably fixed compression values. The fuzzy binary AND-OR algorithm (FBAR) primarily aims to introduce a new model for regular and superdense coding in classical and quantum information theory. Classical coding on x86 machines does not suffice for maximal LDCs that generate fixed values of Cr >= 2:1. The present model, however, is evaluated to serve multidimensional LDCs with fixed-value generation, in contrast to the popular methods used in probabilistic LDCs, such as Shannon entropy. The entropy introduced here is 'fuzzy binary', defined on a 4D hypercube bit-flag model, with a product value of at least 50% compression. We have implemented the compression phase and simulated the decompression phase for lossless versions of the FBAR logic, and compared our algorithm with the results obtained by other compressors. Our statistical tests show that the presented algorithm competes significantly with other LDC algorithms on both the temporal and spatial factors of compression. The current algorithm is a stepping stone toward quantum information models solving complex negative entropies, giving doubly efficient LDCs with more than 87.5% space savings.
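For orientation, the ratio figures quoted above relate to space savings in the usual way (savings = 1 - compressed/original, so Cr = 2:1 means 50% savings and more than 87.5% savings implies Cr >= 8:1). A minimal Python sketch of that bookkeeping, not code from the thesis:

    def compression_ratio(original_bytes, compressed_bytes):
        # Cr = original size / compressed size
        return original_bytes / compressed_bytes

    def space_savings(original_bytes, compressed_bytes):
        # fraction of space saved = 1 - compressed / original
        return 1.0 - compressed_bytes / original_bytes

    # A fixed Cr of 2:1 halves the data (50% savings); the quoted
    # "> 87.5% space savings" corresponds to a ratio of at least 8:1.
    print(compression_ratio(1024, 512), space_savings(1024, 512))   # 2.0, 0.5
    print(compression_ratio(1024, 128), space_savings(1024, 128))   # 8.0, 0.875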
83

Ingénierie des centres colorés dans SiC pour la photonique et la solotronique / Engineering of color centers in SiC for photonics and solotronics

Al Atem, Abdul Salam 29 November 2018 (has links)
Point defects in semiconductor materials are studied for the realization of quantum bits (qubits). To date, the most developed system is based on the NV center in diamond. Recently, point defects in silicon carbide (SiC) have been identified as promising for the realization of qubits owing to the combination of their long spin coherence time and room-temperature operation. In this context, this thesis studies the formation and the optical and magnetic characterization of point defects in SiC, as well as the improvement of their luminescence collection. We begin with a general introduction to SiC in which we describe the different criteria that make SiC a key material for qubit applications. Next, we present a bibliographical study of the main point defects in SiC, focusing on the VSi, VSiVC and NV centers. We then study the optimal conditions of ion/electron irradiation and post-irradiation annealing for the formation of luminescent point defects in the cubic polytype of SiC, and identify the different types of defects emitting in the visible range. In the infrared range, we detect only the Ky5 center (VSiVC), and determine the optimal luminescence conditions of this center for proton implantation (dose of 10^16 cm^-2 and annealing at 750 °C). We then compare the results obtained by electron irradiation with those obtained with protons, specifying the different types of point defects detected by two methods: photoluminescence and electron paramagnetic resonance.
Finally, we develop a technological process for the fabrication of nano-pillars in 4H-SiC and show the benefit of these pillars for the efficiency of the PL collection from point defects such as VSi and VSiVC: an improvement by a factor of 25 for the VSi center and by a factor of 50 for the VSiVC center was obtained.
84

Mapeamento de bits para adaptação rápida a variações de canal de sistemas QAM codificados com LDPC / Bit mapping for fast adaptation to channel variations in LDPC-coded QAM systems

CORRÊA, Fernanda Regina Smith Neves 29 September 2017 (has links)
CNPq - Conselho Nacional de Desenvolvimento Científico e Tecnológico / Low-density parity-check (LDPC) codes have been adopted as the error-correction strategy in several communication system standards, such as G.hn (the unified home-networking standard) and IEEE 802.11n (the wireless LAN standard). In these LDPC-coded quadrature amplitude modulation (QAM) systems, mapping the coded bits properly to the different sub-channels, taking into account that the sub-channels have different qualities, ensures an improved overall system performance. Accordingly, this thesis presents a new bit-mapping technique based on the assumption that bits transmitted in "good" sub-channels help bits transmitted in "bad" sub-channels.
This is made possible through restrictions imposed on the associated Tanner graph, akin to Root-LDPC codes. An optimization of this root-like bit mapping through extrinsic information transfer (EXIT) chart analysis is also presented. We show that the mapping has the advantage of a reduced optimization search space when applied to single-carrier transmission. Moreover, in situations where the search space is not so reduced, such as orthogonal frequency division multiplexing (OFDM)-based applications, we arrive at a simple rule of thumb associated with the bit-mapping constraints that practically eliminates the need for an optimization. Finally, a study of the impact of the level of reliability imbalance across the sub-channels on the performance of the root-like bit mapping is presented. Simulation results show that the new bit-mapping strategy improves performance and that, in the presence of channel variations, the system can adaptively apply a new bit mapping without resorting to a complex optimization, which can be very useful in practical systems.
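As a rough illustration of the "good sub-channels help bad sub-channels" idea, a toy quality-aware assignment is sketched below in Python. The function name, parameters and sorting rule are illustrative assumptions; the thesis's actual mapping is additionally constrained by the LDPC Tanner graph.

    import numpy as np

    def quality_aware_bit_mapping(num_coded_bits, subchannel_snr_db, bits_per_subchannel):
        # Toy illustration only: assign coded-bit positions to sub-channels in
        # decreasing order of SNR, so the earliest ("protected", root-like)
        # positions land on the best sub-channels.
        order = np.argsort(subchannel_snr_db)[::-1]      # best sub-channels first
        mapping = {}
        bit = 0
        for sc in order:
            for _ in range(bits_per_subchannel):
                if bit >= num_coded_bits:
                    return mapping
                mapping[bit] = int(sc)                   # coded-bit index -> sub-channel index
                bit += 1
        return mapping

    # 12 coded bits over 4 sub-channels carrying 4 bits each (e.g. 16-QAM tones):
    print(quality_aware_bit_mapping(12, [3.0, 12.5, 7.1, 0.4], 4))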
85

Portas lógicas totalmente ópticas baseado em interferômetro de Michelson com amplificador óptico semicondutor / All-optical logic gates based on a Michelson interferometer with a semiconductor optical amplifier

OLIVEIRA, Jackson Moreira 24 August 2018 (has links)
In this work, we propose an all-optical logic gate structure based on a Michelson interferometer (MI) built with semiconductor optical amplifiers (SOAs), forming an SOA-MI logic device with a symmetrically identical fiber Bragg grating (FBG) at the output of each of its arms. The AND, OR and NOR logic gates are simulated numerically for two binary input signals with different numbers of bits, using the cross-gain modulation (XGM) technique at a 10 Gb/s bit rate and filter bandwidths of 10, 20 and 40 GHz, with OptiSystem 15.0 software from OptiWave Corporation, in order to demonstrate and extract simple design rules for high-speed optical processing and to analyze the nonlinear properties induced by the SOAs. In addition, this work studies the effect of bandwidth and number of bits on the received power, minimum bit error rate (BER), maximum quality factor (Q-factor), optical signal-to-noise ratio (OSNR) and optical spectrum, demonstrating high-speed, high-performance gates. The SOA-MI-based logic gates were run with several parameter sets, and the results demonstrate a high-performance, high-speed all-optical logic device structure. / IFPA - Instituto Federal de Educação, Ciência e Tecnologia do Pará
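For readers unfamiliar with the Q-factor and BER figures of merit cited above, the standard Gaussian-noise approximation linking them is sketched below; this is the textbook relation, not code from the dissertation.

    import math

    def ber_from_q(q_factor):
        # Standard Gaussian-noise approximation for on-off-keyed signals:
        # BER ~= 0.5 * erfc(Q / sqrt(2))
        return 0.5 * math.erfc(q_factor / math.sqrt(2.0))

    # Q = 6 corresponds to a BER of roughly 1e-9, a common benchmark for
    # "error-free" operation in optical logic-gate simulations.
    print(ber_from_q(6.0))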
86

Interprétation et amélioration d'une procédure de démodulation itérative / Interpretation and improvement of an iterative demodulation procedure

Naja, Ziad 01 April 2010 (has links) (PDF)
Information geometry is the mathematical theory that applies the methods of differential geometry to statistics and information theory. It is a very promising technique for analyzing and illustrating the iterative algorithms used in digital communications. This thesis concerns the application of this technique, together with another well-known optimization technique, the iterative proximal point algorithm, to iterative algorithms in general. We obtain interesting geometric interpretations (based on information geometry) and proximal interpretations (based on the proximal point algorithm) for an iterative algorithm that computes the capacity of discrete memoryless channels, the Blahut-Arimoto algorithm. The idea is then to extend this approach to a class of more complex iterative algorithms. We therefore analyze the iterative decoding algorithm for bit-interleaved coded modulations in order to find similar interpretations, to exhibit links with the optimal maximum-likelihood criterion and with other well-known algorithms, and to bring improvements over the classical form of the algorithm, in particular concerning its convergence.
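Since the Blahut-Arimoto algorithm is the concrete object reinterpreted here, a generic textbook implementation may help fix ideas; this is a sketch of the standard algorithm, not code from the thesis.

    import numpy as np

    def blahut_arimoto(p_y_given_x, tol=1e-9, max_iter=10000):
        # Capacity (in bits) of a discrete memoryless channel with transition
        # matrix p(y|x) (one row per input symbol, rows summing to 1), computed
        # by the classical Blahut-Arimoto fixed-point iteration.
        p = np.asarray(p_y_given_x, dtype=float)
        n_x = p.shape[0]
        r = np.full(n_x, 1.0 / n_x)                 # input distribution r(x)

        def divergences(r):
            q = r @ p                               # output distribution q(y)
            with np.errstate(divide="ignore", invalid="ignore"):
                log_term = np.where(p > 0, np.log2(p / q), 0.0)
            return np.sum(p * log_term, axis=1)     # D( p(.|x) || q ) per input x

        for _ in range(max_iter):
            d = divergences(r)
            r_new = r * np.exp2(d)                  # multiplicative update
            r_new /= r_new.sum()
            if np.max(np.abs(r_new - r)) < tol:
                r = r_new
                break
            r = r_new
        capacity = float(np.sum(r * divergences(r)))
        return capacity, r

    # Binary symmetric channel, crossover 0.1: capacity = 1 - H(0.1) ~ 0.531 bits.
    print(blahut_arimoto([[0.9, 0.1], [0.1, 0.9]]))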
87

Potential alternative sources of funding South Africa’s land redistribution programme in its agricultural sector

Britain-Renecke, Cézanne January 2011 (has links)
No description available.
89

Estudo comparativo do comportamento entre brocas alargadoras e processo de alargamento na usinagem do ferro fundido cinzento GG30 / Comparative study of the behavior of drill reamers versus the reaming process in the machining of GG30 gray cast iron

Lobo, Luciano Jairo 28 May 2015 (has links)
The conventional drilling process is one of the most widely used machining processes. It is usually applied to holes with lower surface-quality requirements, reaching roughness profiles around 6.3 Ra. More refined processes that reach 0.8 Ra, for example, rely on reaming, boring and similar operations, which are more expensive in terms of cutting-tool cost and, above all, operating time. Aiming to reduce machining time while improving hole quality, tool manufacturers have been developing and improving geometries able to combine drilling and reaming operations, with significant productivity gains. Drill reamers, among other characteristics, are manufactured with a larger number of cutting edges, up to eight depending on the tool diameter. The main flutes perform roughing and high chip removal, while the other flutes reduce the roughness, removing little material and providing greater tool stability during machining. The aim of this work is to study the behavior of machining with four- and six-edge drill reamers in producing holes with roughness values of up to 0.8 Ra in GG30 gray cast iron, comparing the results with the conventional reaming process. The preliminary results show that, using drill reamers, the machining time can be reduced by more than 30% compared with the conventional reaming process while keeping roughness values around 0.8 Ra, combining the speed of a conventional drilling process with the surface quality of holes obtained by conventional reaming.
90

Experimentos Computacionais com Implementações de Conjunto por Endereçamento Direto e o Problema de Conjunto Independente Máximo / Computational Experiments with Set Implementations by Direct Addressing and the Maximum Independent Set Problem

Marcio Costa Santos 13 September 2013 (has links)
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior / The use of bit vectors is common practice when representing sets by direct addressing, with the aim of reducing the memory required and improving the efficiency of applications through bit-parallel techniques. In this dissertation, we study implementations for representing sets by direct addressing. The basic structure in these implementations is the bit vector; besides that basic implementation, we also implement two variations. The first is a stratification of the bit vector, while the second uses a hash table. The operations associated with the implemented structures are the inclusion or removal of an element and the union or intersection of two sets, with special attention given to the use of bit parallelism in these operations. The implementations of the different structures share a common interface and an abstract base class, in which the operations are specified and the bit parallelism is exploited; the implementations differ only in the underlying structure. An experimental comparison of the different structures is carried out using enumerative algorithms for the maximum independent set problem.
Two approaches are used in the implementation of the enumerative algorithms for the maximum independent set problem, both exploiting bit parallelism in the representation of the graph and in the operations on subsets of vertices. The first is a branch-and-bound algorithm from the literature and the second employs the Russian dolls method. In both cases, the use of bit parallelism improves efficiency when the lower bounds are computed from a clique cover of the vertices. The results of computational experiments are presented both as a comparison between the two algorithms and as an assessment of the structures implemented. These results show that the algorithm based on the Russian dolls method is more efficient in both running time and memory consumption. Furthermore, the experimental results also show that stratification and hash tables provide additional efficiency gains for graphs with many vertices and few edges.
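To make the bit-parallel set representation concrete, here is a minimal Python sketch (plain integers standing in for the machine-word vectors actually benchmarked; the class and its use in the search are illustrative assumptions, not the dissertation's code):

    class BitSet:
        # Minimal direct-addressing set over {0, ..., n-1} backed by a single
        # Python integer used as a bit vector, so that union and intersection
        # become word-parallel bitwise operations -- the same idea the
        # dissertation benchmarks with explicit machine-word arrays,
        # stratification and hash tables.

        def __init__(self, universe_size):
            self.n = universe_size
            self.bits = 0

        def add(self, x):
            self.bits |= 1 << x            # set bit x

        def discard(self, x):
            self.bits &= ~(1 << x)         # clear bit x

        def __contains__(self, x):
            return (self.bits >> x) & 1 == 1

        def union(self, other):
            out = BitSet(self.n)
            out.bits = self.bits | other.bits
            return out

        def intersection(self, other):
            out = BitSet(self.n)
            out.bits = self.bits & other.bits
            return out

        def __len__(self):
            return bin(self.bits).count("1")   # population count

    # In a branch-and-bound (or Russian dolls) search for a maximum independent
    # set, the candidate set is typically shrunk by one such intersection per
    # branching step, e.g. candidates = candidates.intersection(non_neighbours[v]).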
