  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Performance analysis of energy detector over different generalised wireless channels based spectrum sensing in cognitive radio

Al-Hmood, Hussien January 2015
This thesis extensively analyses the performance of an energy detector, which is widely employed to perform spectrum sensing in cognitive radio, over different generalised channel models. In this analysis, both the average probability of detection and the average area under the receiver operating characteristic curve (AUC) are derived using the probability density function of the received instantaneous signal-to-noise ratio (SNR). The performance of the energy detector over η-µ fading, which is used to model non-line-of-sight (NLoS) communication scenarios, is provided first. Then, the behaviour of the energy detector over the κ-µ shadowed fading channel, a composite generalised multipath/shadowing fading channel that models the line-of-sight (LoS) communication medium, is investigated. The analysis of the energy detector over both η-µ and κ-µ shadowed fading channels is then extended to include maximal ratio combining (MRC), square-law combining (SLC) and square-law selection (SLS) with independent and non-identically distributed (i.n.d.) diversity branches. To overcome the mathematical intractability of analysing the energy detector over i.n.d. composite fading channels with MRC and selection combining (SC), two different unified statistical models for the sum and the maximum of mixture gamma (MG) variates are derived. The first model is limited by the value of the shadowing severity index, which must be an integer, and is employed to study the performance of the energy detector over the composite α-µ/gamma fading channel, proposed to represent non-linear propagation environments. The second model is general and is utilised to analyse the behaviour of the energy detector over the composite η-µ/gamma fading channel. Finally, a special filter-bank transform called the slantlet packet transform (SPT) is developed and used to estimate the uncertain noise power.
Moreover, signal denoising based on the hybrid slantlet transform (HST) is employed to reduce the impact of noise on the performance of the energy detector. The combined SPT-HST approach improves the detection capability of the energy detector by 97% and reduces the total computational complexity by nearly 19% in comparison with previously implemented work using filter-bank transforms. These percentages are measured at a specific SNR, number of selected samples and number of signal-decomposition levels.
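The energy-detection test underlying this analysis can be sketched numerically. The following is a minimal Monte Carlo illustration over a plain AWGN channel, not the generalised fading models of the thesis; the sample count, trial count, deterministic pilot, and 1% false-alarm target are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def energy_detect(snr_db, n_samples=100, trials=10000):
    """Monte Carlo estimate of false-alarm and detection probabilities for a
    simple energy detector: T = sum |x|^2 compared against a threshold set
    empirically for a ~1% false-alarm rate under noise only."""
    snr = 10 ** (snr_db / 10)
    # Complex AWGN with unit noise power, for both hypotheses
    noise = (rng.standard_normal((trials, n_samples)) +
             1j * rng.standard_normal((trials, n_samples))) / np.sqrt(2)
    signal = np.sqrt(snr) * np.ones(n_samples)  # deterministic pilot (assumption)
    t_h0 = np.sum(np.abs(noise) ** 2, axis=1)           # noise only
    t_h1 = np.sum(np.abs(signal + noise) ** 2, axis=1)  # signal + noise
    threshold = np.quantile(t_h0, 0.99)                  # ~1% false alarm
    p_fa = np.mean(t_h0 > threshold)
    p_d = np.mean(t_h1 > threshold)
    return p_fa, p_d

p_fa, p_d = energy_detect(snr_db=0)
```

Sweeping the threshold instead of fixing it would trace the receiver operating characteristic whose area (AUC) the thesis derives in closed form.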
2

UWB communication systems acquisition at symbol rate sampling for IEEE standard channel models

Cheng, Xia 29 March 2007
For ultra-wideband (UWB) communications, acquisition is challenging because of the ultra-short pulse shape and ultra-dense multipath interference. The ultra-short pulse means the acquisition region is very narrow, and sampling is a further design challenge due to the need for an ultra-high-speed analog-to-digital converter.
A sub-optimum, under-sampling acquisition scheme using pilot codes as a transmitted reference is proposed here, with the receiver sampling at the symbol rate. A new architecture, the reference-aided matched filter, is studied in this project. It avoids using a complex rake receiver to estimate channel parameters and a high sampling rate for interpolation. A limited number of matched filters are used as a filter bank to search for the strongest path. The timing offset for acquisition is then estimated and passed to an advanced verification algorithm. For optimum acquisition performance, adaptive post-detection integration is proposed to deal with dense inter-symbol interference during acquisition; a low-complexity early-late gate tracking loop is one element of this integration and helps improve acquisition accuracy. The proposed scheme is evaluated using Matlab Simulink simulations in terms of mean acquisition time, system performance and false alarms. Simulation results show the proposed algorithm is very effective in ultra-dense multipath channels, and this research demonstrates that reference-aided acquisition with a tracking loop is promising for UWB applications.
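The filter-bank search for the strongest path described above amounts to a sliding correlation of the received signal against the known pulse template. A minimal sketch, assuming a hypothetical three-tap pulse and a two-path channel rather than the IEEE standard channel models used in the thesis:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical short UWB-like pulse template and a two-path channel (assumptions)
pulse = np.array([1.0, -1.0, 0.5])
true_delay = 37
rx = np.zeros(200)
rx[true_delay:true_delay + 3] += 1.0 * pulse        # strongest path
rx[true_delay + 9:true_delay + 12] += 0.4 * pulse   # weaker echo
rx += 0.05 * rng.standard_normal(rx.size)           # additive noise

# A bank of matched filters at all candidate offsets is equivalent to a
# sliding correlation; the peak magnitude gives the timing-offset estimate.
corr = np.correlate(rx, pulse, mode="valid")
est_delay = int(np.argmax(np.abs(corr)))
```

In the thesis's scheme this coarse estimate would then be handed to the verification algorithm and refined by the early-late gate tracking loop.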
4

Wireless Channel Characterization for Large Indoor Environments at 5 GHz

Sakarai, Deesha S. 26 July 2012
No description available.
5

Performance Analysis of Space-Time Coded Modulation Techniques using GBSB-MIMO Channel Models

Nory, Ravikiran 06 June 2002
Wireless systems are rapidly developing to provide the high-speed voice, text and multimedia messaging services traditionally offered by wireline networks. To support these services, channels with large capacities are required. Information-theoretic investigations have shown that Multiple Input Multiple Output (MIMO) channels can achieve very high capacities. Space-Time Block Coding (STBC) and the Bell Labs Layered Space-Time Architecture (BLAST) are two potential schemes that utilize the diversity offered by MIMO channels to provide reliable high-data-rate wireless communication. This work studies the sensitivity of these two schemes to spatial correlation in MIMO channels. The first part of the thesis studies the effect of spatial correlation on the performance of STBC by using Geometrically Based Single Bounce MIMO (GBSB-MIMO) channel models. Performance is analyzed for two scenarios: one without scatterers in the vicinity of the transmitter and the other with them. In the second part of the thesis, the sensitivity of BLAST to spatial correlation is analyzed. Schemes that use the principles of Multilayered Space-Time Coded Modulation to combine the benefits of BLAST and STBC are then introduced, and their performance is investigated in correlated and uncorrelated Rayleigh fading. Results indicate that schemes using orthogonal-design space-time block codes are reasonably robust to spatial correlation, while schemes like BLAST are very sensitive, as they depend on array processing to separate the signals from the various transmit antennas. / Master of Science
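As a concrete instance of an orthogonal-design space-time block code, the classic 2x1 Alamouti scheme (the simplest STBC, used here as an illustration rather than the exact configurations studied in the thesis) can be sketched as:

```python
import numpy as np

rng = np.random.default_rng(2)

def alamouti_encode(s1, s2):
    """Alamouti space-time block code: two symbols over two transmit
    antennas in two slots; rows are antennas, columns are time slots."""
    return np.array([[s1, -np.conj(s2)],
                     [s2,  np.conj(s1)]])

def alamouti_decode(r, h1, h2):
    """Linear combining at a single receive antenna with known flat-fading
    channel gains h1, h2; recovers both symbols with diversity order 2."""
    r1, r2 = r
    s1_hat = np.conj(h1) * r1 + h2 * np.conj(r2)
    s2_hat = np.conj(h2) * r1 - h1 * np.conj(r2)
    gain = abs(h1) ** 2 + abs(h2) ** 2
    return s1_hat / gain, s2_hat / gain

# Noiseless check over a random complex flat-fading channel
h1, h2 = rng.standard_normal(2) + 1j * rng.standard_normal(2)
s1, s2 = 1 + 1j, -1 + 1j
x = alamouti_encode(s1, s2)
r = h1 * x[0] + h2 * x[1]   # received samples in the two time slots
d1, d2 = alamouti_decode(r, h1, h2)
```

The orthogonality of the code matrix is what makes this linear decoder exact; spatial correlation between the antenna paths erodes the diversity gain, which is the sensitivity the thesis quantifies.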
6

Terrestrial radio wave propagation at millimeter-wave frequencies

Xu, Hao 05 May 2000
This research focuses on radio wave propagation at millimeter-wave frequencies. A measurement-based channel characterization approach is taken in the investigation. First, measurement techniques are analyzed. Three types of measurement systems are designed and implemented in measurement campaigns: a narrowband measurement system, a wideband measurement system based on a Vector Network Analyzer, and sliding correlator systems at 5.8 GHz, 38 GHz and 60 GHz. The performances of these measurement systems are carefully compared both analytically and experimentally. Next, radio wave propagation research is performed at 38 GHz for Local Multipoint Distribution Services (LMDS). Wideband measurements are taken on three cross-campus links at Virginia Tech. The goal is to determine weather effects on the wideband channel properties. The measurement results include multipath dispersion, short-term variation and signal attenuation under different weather conditions. A design technique is developed to estimate multipath characteristics based on antenna patterns and site-specific information. Finally, indoor propagation channels at 60 GHz are studied for Next Generation Internet (NGI) applications. The research mainly focuses on the characterization of space-time channel structure. Multipath components are resolved both in time of arrival (TOA) and angle of arrival (AOA). Results show an excellent correlation between the propagation environments and the channel multipath structure. The measurement results and models provide not only guidelines for wireless system design and installation, but also great insights into millimeter-wave propagation. / Ph. D.
7

A noisy-channel based model to recognize words in eye typing systems

Hanada, Raíza Tamae Sarkis 04 April 2018
An important issue with eye-based typing is the correct identification of both when the user selects a key and which key is selected. Traditional solutions are based on a predefined gaze-fixation time, known as dwell-time methods. In an attempt to improve accuracy, long dwell times are adopted, which in turn lead to fatigue and longer response times. These problems motivate the proposal of methods free of dwell time, or with very short ones, which rely on more robust recognition techniques to reduce the uncertainty about the user's actions. These techniques are especially important when users have disabilities that affect their eye movements or use inexpensive eye trackers. One approach to the recognition problem is to treat it as a spelling-correction task. A usual strategy for spelling correction is to model the problem as the transmission of a word through a noisy channel, such that it is necessary to determine which known word of a lexicon corresponds to the received string. A feasible application of this method requires reducing the set of candidate words by choosing only the ones that can be transformed into the input by applying up to k character edit operations. This idea works well for traditional typing because the number of errors per word is very small. However, this is not the case for eye-based typing systems, which are much noisier. In such a scenario, spelling-correction strategies do not scale well, as their cost grows exponentially with k and the lexicon size. Moreover, the error distribution in eye typing is different, with many more insertion errors due to specific sources of noise such as the eye-tracker device, particular user behaviours, and intrinsic characteristics of eye movements. Also, the lack of a large corpus of errors makes it hard to adopt probabilistic approaches based on information extracted from real-world data.
To address all these problems, we propose an effective recognition approach that combines estimates extracted from general error corpora with domain-specific knowledge about eye-based input. The technique is able to calculate edit distances efficiently by using a Mor-Fraenkel index, searchable using minimum perfect hashing. The method allows the early processing of the most promising candidates, such that fast pruned searches present negligible loss in word-ranking quality. We also propose a linear heuristic for estimating edit-based distances that takes advantage of information already provided by the index. Finally, we extend our recognition model to include the variability of eye movements as a source of errors, provide a comprehensive study of the importance of the noise model when combined with a language model, and determine how it affects user behaviour during typing. As a result, we obtain a method that is very effective at recognizing words and fast enough to be used in real eye-typing systems. In a transcription experiment, 8 users achieved 17.46 words per minute using the proposed model, a gain of 11.3% over a state-of-the-art eye-typing system. The method was particularly useful in noisier situations, such as first-use sessions. Despite significant gains in typing speed and word-recognition ability, we found no statistically significant differences in the participants' perception of their experience with the two methods. This indicates that an improved suggestion ranking may not be clearly perceptible to users even when it enhances their performance.
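The noisy-channel decoding idea described above (rank lexicon words by a channel likelihood times a language-model prior, pruning candidates beyond k edits) can be sketched as follows. The geometric error_rate**distance channel model, the linear scan, and the toy lexicon are illustrative assumptions; the thesis's actual model is estimated from error corpora and uses a Mor-Fraenkel index with minimum perfect hashing instead of scanning.

```python
from functools import lru_cache

def edit_distance(a, b):
    """Standard Levenshtein distance via memoized dynamic programming."""
    @lru_cache(maxsize=None)
    def d(i, j):
        if i == 0:
            return j
        if j == 0:
            return i
        return min(d(i - 1, j) + 1,                      # deletion
                   d(i, j - 1) + 1,                      # insertion
                   d(i - 1, j - 1) + (a[i - 1] != b[j - 1]))  # substitution
    return d(len(a), len(b))

def recognize(observed, lexicon, unigram, error_rate=0.2, max_k=4):
    """Noisy-channel decoding: pick argmax_w P(observed | w) * P(w), with a
    crude channel model P(observed | w) ~ error_rate**edit_distance and a
    unigram prior, keeping only candidates within max_k edits."""
    best, best_score = None, 0.0
    for w in lexicon:
        k = edit_distance(observed, w)
        if k > max_k:
            continue
        score = (error_rate ** k) * unigram.get(w, 1e-9)
        if score > best_score:
            best, best_score = w, score
    return best

# Hypothetical toy lexicon and unigram prior (assumptions)
lexicon = ["hello", "help", "hollow", "yellow"]
unigram = {"hello": 0.5, "help": 0.3, "hollow": 0.1, "yellow": 0.1}
word = recognize("hhelol", lexicon, unigram)
```

Here "hhelol" is equidistant from "hello" and "help", so the language-model prior breaks the tie, which is exactly the interplay between noise model and language model the thesis studies.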
8

Hybrid Satellite-Terrestrial Cooperative Systems: Performance Analysis and System Dimensioning

Sreng, Sokchenda 11 December 2012
Satellite communication systems are used in the context of broadcasting, navigation, rescue, and disaster relief, since they allow the provision of services over a wide coverage area. However, this coverage area is limited by the masking effect caused by obstacles that block the line-of-sight (LOS) link between the satellite and a terrestrial user. The masking effect becomes more severe at low satellite elevation angles or when the user is indoors. To address this issue, Hybrid Satellite-Terrestrial Cooperative Systems (HSTCSs) have been proposed. In an HSTCS, the mobile user can exploit spatial diversity by receiving signals from both the satellite and terrestrial components. Fixed or mobile gap-fillers are used to relay the satellite signal.
Most satellite broadcasting systems have been implemented using fixed gap-fillers, while mobile gap-fillers are needed in emergency cases when the fixed infrastructure is not available. In emergency scenarios (e.g., fire, earthquake, flood, explosion), the existing terrestrial infrastructure may be destroyed, so an HSTCS is appropriate for transmitting information between the rescuers and the central office, allowing the rescuers to operate efficiently and safely. In particular, a fast and flexible implementation is needed, which could be provided by deploying mobile gap-fillers (vehicle-mounted or handheld). Recently, the topic of HSTCSs has gained interest in the research community, and several cooperative scenarios and transmission techniques have been proposed and studied. However, most existing approaches only provide a performance analysis based on simulation results, and the analytical expression of the exact Symbol Error Probability (SEP) is generally not provided. This dissertation focuses on the performance analysis of HSTCSs. The exact closed-form outage probability and SEP of the Selective Decode-and-Forward (SDF) transmission scheme, with and without relay selection, are derived for both M-ary phase-shift keying (MPSK) and M-ary quadrature amplitude modulation (MQAM). This analytical SEP helps in designing and dimensioning HSTCSs, and the results are applicable to both fixed and mobile relaying techniques. Another part of the dissertation is dedicated to synchronization issues (time and frequency shifting/spreading). The mobility of users induces a Doppler spread in the Orthogonal Frequency Division Multiplexing (OFDM) signal that destroys the orthogonality of subcarriers. The loss of orthogonality produces inter-carrier interference (ICI) and hence a degradation of system performance in terms of SEP.
In this case, we present the conditions under which this degradation can be compensated for by an increase in the Signal-to-Noise Ratio (SNR) at the transmitter side. The result depends on both the modulation scheme and the speed of the mobile users.
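The Doppler-induced degradation discussed above is often quantified by treating ICI as additional noise. A small sketch using the common small-Doppler approximation P_ICI ≈ (π f_d T)²/3, an assumption drawn from the general OFDM literature rather than a formula taken from this dissertation:

```python
import math

def ici_power(fd_T):
    """Approximate ICI power for OFDM under Doppler spread, where fd_T is the
    normalized Doppler f_d * T_symbol; small-Doppler bound (pi*fd_T)^2 / 3."""
    return (math.pi * fd_T) ** 2 / 3

def snr_degradation_db(snr_db, fd_T):
    """Extra transmit SNR (dB) needed to restore the effective SINR when
    Doppler-induced ICI is modeled as independent additive noise."""
    snr = 10 ** (snr_db / 10)
    return 10 * math.log10(1 + snr * ici_power(fd_T))

# A faster user (larger normalized Doppler) needs a larger SNR margin
d_slow = snr_degradation_db(20, 0.01)
d_fast = snr_degradation_db(20, 0.05)
```

This also shows why compensation by raising transmit SNR has limits: as snr grows, the degradation term grows with it, so beyond some speed the ICI floor cannot be bought back, which is the kind of condition the dissertation characterizes.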
10

Adaptive Error Control for Wireless Multimedia

Yankopolus, Andreas George 13 April 2004
Future wireless networks will be required to support multimedia traffic in addition to traditional best-effort network services. Supporting multimedia traffic on wired networks already presents a large number of design problems, particularly for networks that run connectionless transport protocols such as the TCP/IP suite, and these problems are magnified on wireless links, whose quality varies widely and uncontrollably. This dissertation presents new tools developed for the design and realization of wireless networks, including, for the first time, analytical channel models for predicting the efficacy of error-control codes, interleaving schemes, and signalling protocols, together with several novel algorithms for matching and adapting system parameters (such as error control and frame length) to time-varying channels and Quality of Service (QoS) requirements.
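As one illustration of matching a system parameter to a time-varying channel, the sketch below adapts frame length to an SNR estimate using the uncoded BPSK error rate; the target frame error rate, the halving policy, and the 16-bit floor are illustrative assumptions, not the dissertation's algorithms.

```python
import math

def bpsk_ber(snr_db):
    """Uncoded BPSK bit error rate over AWGN: Q(sqrt(2*SNR)) = 0.5*erfc(sqrt(SNR))."""
    snr = 10 ** (snr_db / 10)
    return 0.5 * math.erfc(math.sqrt(snr))

def pick_frame_length(snr_db, max_len=1024, target_fer=0.01):
    """Choose the longest frame (in bits) whose frame error rate
    1 - (1 - p)^n stays below the target for the current channel estimate,
    halving the length otherwise, with a floor of 16 bits."""
    p = bpsk_ber(snr_db)
    n = max_len
    while n > 16 and 1 - (1 - p) ** n > target_fer:
        n //= 2
    return n

# Shorter frames on a bad channel, longer ones on a good channel
n_bad = pick_frame_length(4.0)
n_good = pick_frame_length(10.0)
```

A real adaptive scheme would also switch error-control codes and interleaving depth, but the same pattern applies: estimate the channel, predict the error behaviour analytically, then pick the parameter set that meets the QoS target.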
