1.
Speech Pause in People With Aphasia Across Word Length, Frequency, and Syntactic Category. Mitchell, Lana, 14 June 2022 (has links)
This study examines how a word's syntactic category, length, and usage frequency might impact a speaker's use of communicative pause. Previously collected between- and within-utterance language samples from 21 people with aphasia (Harmon, 2018) were evaluated. Participants consisted of 11 individuals diagnosed with mild or very mild aphasia and 10 individuals with moderate aphasia; 15 exhibited fluent subtypes and 6 non-fluent subtypes of aphasia. Data from the Corpus of Contemporary American English (COCA) were used to code the word frequency and syntactic category of each word in the language samples. Generally, speakers with both non-fluent and fluent aphasia produced more monosyllabic words of very high frequency, with a greater percentage of function words than content words. Analyses revealed no significant correlations between pause duration and either word length or word frequency for either group of speakers. In relation to syntactic category, no significant differences in pause duration were found between content and function words in the between-utterance condition. However, non-fluent speakers preceded content words with significantly shorter pause durations within utterances compared with function words. Due to differences in sample sizes between the speaker and syntactic groups, non-parametric statistics were used for some comparisons. In addition, this study does not fully account for the influence of fillers and incomplete words. Despite these limitations, it contributes to the research regarding communicative speech pause in speakers with aphasia and provides insight into more useful diagnostic and treatment strategies.
2.
Multilattice Tilings and Coverings. Linnell, Joshua Randall, 02 April 2021 (has links)
Let $L$ be a discrete subgroup of $\mathbb{R}^n$ under addition, and let $D$ be a finite set of points including the origin. Together these two sets define a multilattice of $\mathbb{R}^n$. We explore how to generate a periodic covering of $\mathbb{R}^n$ based on $L$ and $D$. Additionally, we explore the covering problem when we restrict ourselves to covering $\mathbb{R}^n$ using only dilations of the right regular simplex. We show that, using a set $D = \{0, d\}$ to define our multilattice, the minimum covering density is $5 - \sqrt{13}$. Furthermore, we show that when we allow an arbitrary number of displacements, we may get arbitrarily close to a covering density of $1$.
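As a sketch of the objects involved (standard definitions under assumed notation, not taken verbatim from the thesis; the covering body $K$ is a generic placeholder):

```latex
% A multilattice generated by a lattice L and a finite displacement set D,
% and the usual density of a covering of R^n by translates of a body K
% placed at the points of L + D.
\[
  L + D = \{\, \ell + d : \ell \in L,\ d \in D \,\}, \qquad
  \theta(K) = \frac{|D| \cdot \operatorname{vol}(K)}{\det(L)} .
\]
% Every covering satisfies theta(K) >= 1. The abstract's result says the
% minimum density is 5 - sqrt(13) (about 1.394) for D = {0, d} with simplex
% tiles, and that theta can approach 1 as |D| grows.
```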
3.
Word length and the principle of least effort : language as an evolving, efficient code for information transfer. Kanwal, Jasmeen Kaur, January 2018
In 1935 the linguist George Kingsley Zipf made a now classic observation about the relationship between a word's length and its frequency: the more frequent a word is, the shorter it tends to be. He claimed that this 'Law of Abbreviation' is a universal structural property of language. The Law of Abbreviation has since been documented in a wide range of human languages, and extended to animal communication systems and even computer programming languages. Zipf hypothesised that this universal design feature arises as a result of individuals optimising form-meaning mappings under competing pressures to communicate accurately but also efficiently - his famous Principle of Least Effort. In this thesis, I present a novel set of studies which provide direct experimental evidence for this explanatory hypothesis.

Using a miniature artificial language learning paradigm, I show in Chapter 2 that language users optimise form-meaning mappings in line with the Law of Abbreviation only when pressures for accuracy and efficiency both operate during a communicative task. These results are robust across different methods of data collection: one version of the experiment was run in the lab, and another was run online, using a novel method I developed which allows participants to partake in dyadic interaction through a web-based interface.

In Chapter 3, I address the growing body of work suggesting that a word's predictability in context may be an even stronger determiner of its length than its frequency alone. For instance, Piantadosi et al. (2011) show that shorter words have a lower average surprisal (i.e., tend to appear in more predictive contexts) than longer words, in synchronic corpora across many languages. We hypothesise that the same communicative pressures posited by the Principle of Least Effort, when acting on speakers in situations where context manipulates the information content of words, can give rise to these lexical distributions. Adapting the methodology developed in Chapter 2, I show that participants use shorter words in more predictive contexts only when subject to the competing pressures for accurate and efficient communication. In a second experiment, I show that participants are more likely to use shorter words for meanings with a lower average surprisal. These results suggest that communicative pressures acting on individuals during language use can lead to the re-mapping of a lexicon to align with 'Uniform Information Density', the principle that information content ought to be spread evenly across an utterance, such that shorter linguistic units carry less information than longer ones.

Over generations, linguistic behaviour such as that observed in the experiments reported here may bring entire lexicons into alignment with the Law of Abbreviation and Uniform Information Density. For this to happen, a diachronic process which leads to permanent lexical change is necessary. However, crucial evidence for this process - decreasing word length as a result of increasing frequency over time - has never before been systematically documented in natural language. In Chapter 4, I conduct the first large-scale diachronic corpus study investigating the relationship between word length and frequency over time, using the Google Books Ngrams corpus and three different word lists covering both English and French. Focusing on words which have both long and short variants (e.g., info/information), I show that the frequency of a word lemma may influence the rate at which the shorter variant gains in popularity. This suggests that the lexicon as a whole may indeed be gradually evolving towards greater efficiency.

Taken together, the behavioural and corpus-based evidence presented in this thesis supports the hypothesis that communicative pressures acting on language users are at least partially responsible for the frequency-length and surprisal-length relationships found universally across lexicons. More generally, the approach taken in this thesis promotes a view of language as, among other things, an evolving, efficient code for information transfer.
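As a minimal illustration of the frequency-length relationship at issue (a toy computation over made-up text, not the thesis' actual corpus pipeline):

```python
from collections import Counter
from scipy.stats import spearmanr

# Toy corpus; the thesis uses large corpora such as Google Books Ngrams.
text = ("the cat sat on the mat and the dog sat on the log "
        "while the information about the situation was brief").split()

freq = Counter(text)
types = list(freq)
# Zipf's Law of Abbreviation predicts a negative correlation:
# higher-frequency word types tend to be shorter.
rho, p = spearmanr([freq[w] for w in types], [len(w) for w in types])
print(f"Spearman rho = {rho:.2f}, p = {p:.3f}")
```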
4.
FPGA Implementation of Short Word-Length Algorithms. Thakkar, Darshan Suresh, darshanst@gmail.com, January 2008 (has links)
Short Word-Length (SWL) refers to single-bit, two-bit or ternary processing systems. SWL systems use the Sigma-Delta Modulation (SDM) technique to express an analogue or multi-bit input signal as a high-frequency single-bit stream. In Sigma-Delta Modulation, the input signal is coarsely quantized into a single-bit representation by sampling it at a rate much higher than twice the maximum input frequency, viz. the Nyquist rate. This single-bit representation is almost exclusively filtered to remove conversion quantization noise and decimated to the Nyquist frequency in preparation for traditional signal processing. SWL algorithms have huge potential in a variety of applications as they offer many advantages compared to multi-bit approaches, including efficient hardware implementation, increased flexibility and massive cost savings. Field Programmable Gate Arrays (FPGAs) are SRAM/FLASH-based integrated circuits that can be programmed and re-programmed by the end user. FPGAs are made up of arrays of logic gates, routing channels and I/O blocks. State-of-the-art FPGAs include features such as advanced clock management, dedicated multipliers, DSP slices, high-speed I/O and embedded microprocessors. A System-on-Programmable-Chip (SoPC) design approach uses some or all of the aforementioned resources to create a complete processing system on the device itself, ensuring maximum silicon area utilization and higher speed by eliminating inter-chip communication overheads. This dissertation focuses on the application of SWL processing systems to audio Class-D amplifiers and aims to substantiate the claims of efficient hardware implementation and higher speeds of operation. The analogue Class-D amplifier is analyzed and an SWL equivalent of the system is derived by replacing the analogue components with DSP functions wherever possible. The SWL Class-D amplifier is implemented on an FPGA, the standard emulation platform, using the VHSIC Hardware Description Language (VHDL). The approach is taken a step further by adding re-configurability and media selectivity, and by proposing SDM adaptivity to improve performance.
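A minimal sketch of the single-bit encoding step described above: a first-order sigma-delta modulator in generic textbook form (Python; the thesis' actual modulator order and coefficients are not specified here):

```python
import numpy as np

def sdm_first_order(x):
    """First-order sigma-delta modulator: maps an oversampled input
    (|x| <= 1) to a +/-1 bitstream whose low-frequency content tracks x."""
    v = 0.0            # integrator state
    y = 1.0            # previous 1-bit output
    bits = np.empty_like(x)
    for n, xn in enumerate(x):
        v += xn - y                     # integrate input minus fed-back bit
        y = 1.0 if v >= 0.0 else -1.0   # 1-bit quantizer
        bits[n] = y
    return bits

# Heavily oversampled 0.5-amplitude sine; low-pass filtering (decimating)
# the bitstream recovers an approximation of the input.
t = np.arange(8192)
x = 0.5 * np.sin(2 * np.pi * t / 1024)
bits = sdm_first_order(x)
recovered = np.convolve(bits, np.ones(64) / 64, mode="same")
print(f"max reconstruction error: {np.abs(recovered - x).max():.3f}")
```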
5.
Electronic Dispersion Compensation For Interleaved A/D Converters in a Standard Cell ASIC Process. Clark, Matthew David, 25 June 2007
The IEEE 802.3aq standard recommends a multi-tap decision feedback equalizer be implemented to remove inter-symbol interference and additive system noise from data transmitted over a 10 Gigabit per second (10 Gbps) multi-mode fiber-optic link (MMF). The recommended implementation produces a design in an analog process. This design process is difficult, time-consuming, and expensive to modify if first-pass silicon success is not achieved.

Performing the majority of the design in a well-characterized digital process with stable, evolutionary tools reduces the technical risk. ASIC design rule checking is more predictable than custom tool flows and produces regular, repeatable results. Register Transfer Language (RTL) changes can also be implemented relatively quickly when compared to the custom flow. However, standard cell methodologies are expected to achieve clock rates of roughly one-tenth of the corresponding analog process.

The architecture and design for a parallel linear equalizer and decision feedback equalizer are presented. The presented design demonstrates an RTL implementation of 10 GHz filters operating in parallel at 625 MHz. The performance of the filters is characterized by testing the design against a set of 324 reference channels, and the results are compared against the IEEE standard group's recommended implementation. The linear equalizer design of 20 taps equalizes 88% of the reference channels. The decision feedback equalizer design of 20 forward and 1 reverse tap equalizes 93% of the reference channels. Analysis of the unequalized channels is performed, and areas for continuing research are presented.
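A sketch of the decision feedback structure described above (a generic Python behavioral model with made-up tap values; the standard's recommended tap weights and the thesis' parallelized RTL are not reproduced):

```python
import numpy as np

def dfe(rx, ff_taps, fb_taps):
    """Decision feedback equalizer: a feed-forward FIR over received
    samples, minus a feedback FIR over past (binary +/-1) decisions."""
    ff = np.zeros(len(ff_taps))   # received-sample delay line
    fb = np.zeros(len(fb_taps))   # past-decision delay line
    out = np.empty(len(rx))
    for n, r in enumerate(rx):
        ff = np.roll(ff, 1); ff[0] = r
        y = ff_taps @ ff - fb_taps @ fb   # equalized soft value
        d = 1.0 if y >= 0.0 else -1.0     # slicer decision
        fb = np.roll(fb, 1); fb[0] = d
        out[n] = d
    return out

# Toy channel with one strong postcursor; a single feedback tap cancels it,
# echoing the 20-forward / 1-reverse-tap structure in the abstract.
bits = np.sign(np.random.default_rng(0).standard_normal(1000))
rx = bits + 0.6 * np.concatenate(([0.0], bits[:-1]))   # y[n] = x[n] + 0.6 x[n-1]
eq = dfe(rx, ff_taps=np.array([1.0]), fb_taps=np.array([0.6]))
print(f"bit errors: {int(np.sum(eq != bits))}")
```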
6.
Wordlength inference in the Spade HDL : Seven implementations of wordlength inference and one implementation that actually works. Thörnros, Edvard, January 2023 (has links)
Compilers are complex programs with the potential to greatly facilitate software and hardware design. This thesis focuses on enhancing the Spade hardware description language, known for its user-friendly approach to hardware design. In hardware development, data size plays a critical role in reducing hardware resources; for numerical values, data size is known as "wordlength". This study presents an approach that integrates wordlength inference directly into the Spade language, enabling the over-estimation of numeric data sizes solely from the program's source code. The methodology involves iterative development, incorporating various smaller implementations and evaluations, reminiscent of an agile approach. To assess the efficacy of the wordlength inference, multiple place-and-route operations are performed on identical Spade code using various versions of nextpnr. Surprisingly, no discernible impact on hardware resource utilization emerges from the modifications introduced in this thesis. Nonetheless, the true significance of this work lies in its potential to unlock more advanced language features within the Spade compiler. While the wordlength inference proposed in this thesis shows promise, it requires further integration effort to realize its full potential.
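A sketch of the kind of inference involved, over a toy expression tree (hypothetical rules in Python; Spade's actual inference rules and compiler internals are not shown here):

```python
from dataclasses import dataclass

@dataclass
class Sig:            # a signal with a known wordlength
    width: int

@dataclass
class Add:
    lhs: object
    rhs: object

@dataclass
class Mul:
    lhs: object
    rhs: object

def infer_width(e):
    """Conservative wordlength inference: over-estimate so the hardware
    can never overflow. An add may produce one carry bit; a multiply
    needs the sum of the operand widths."""
    if isinstance(e, Sig):
        return e.width
    if isinstance(e, Add):
        return max(infer_width(e.lhs), infer_width(e.rhs)) + 1
    if isinstance(e, Mul):
        return infer_width(e.lhs) + infer_width(e.rhs)
    raise TypeError(f"unknown expression node: {e!r}")

# (a * b) + c with 8-bit operands: 16 bits for the product, 17 after the add.
print(infer_width(Add(Mul(Sig(8), Sig(8)), Sig(8))))  # -> 17
```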
7.
Evaluation of Word Length Effects on Multistandard Soft Decision Viterbi Decoding. Salim, Ahmed, January 2011
Many parity-inducing techniques, such as Forward Error Correction (FEC), have been proposed to cope with channel-induced errors to a large extent, if not eradicate them completely. Convolutional codes are widely recognized as among the most efficient of the known channel coding techniques. However, decoding a convolutionally encoded data stream at the receiving node can be quite complex, time-consuming and memory-inefficient.

This thesis outlines the implementation of a multistandard soft-decision Viterbi decoder and the word length effects on it. The classic Viterbi algorithm and its variant, the soft-decision Viterbi algorithm, are discussed, along with zero-tail and tail-biting termination of the trellis. For the final implementation in the C language, the zero-tail termination approach with soft-decision Viterbi decoding is adopted. This memory-efficient implementation is flexible for any code rate and any constraint length.

The results obtained are compared with a MATLAB reference decoder. Simulation results show the performance of the decoder and reveal the interesting trade-off between finite word length and system performance. Such an investigation can be very beneficial for the hardware design of communication systems. This is of high interest for the Viterbi algorithm, as convolutional codes have been selected in several well-known standards, including WiMAX, EDGE, IEEE 802.11a, GPRS, WCDMA, GSM, CDMA 2000 and 3GPP-LTE.
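A sketch of soft-decision Viterbi decoding with zero-tail termination, for a small rate-1/2, constraint-length-3 code (a generic textbook code in Python, not the thesis' C implementation; the generator polynomials (7, 5) octal are an assumption for illustration):

```python
G = [0b111, 0b101]        # rate-1/2 generators, K = 3
N_STATES = 4              # 2^(K-1) trellis states

def parity(x):
    return bin(x).count("1") & 1

def encode(bits):
    state, out = 0, []
    for b in bits:
        reg = (b << 2) | state          # [newest bit | 2-bit state]
        out += [parity(reg & g) for g in G]
        state = reg >> 1                # shift register advances
    return out

def viterbi_soft(rx):
    """rx: received BPSK samples (coded bit 0 -> +1, 1 -> -1) plus noise.
    Minimizes squared Euclidean distance; returns the survivor ending in
    state 0 (zero-tail termination)."""
    INF = float("inf")
    pm = [0.0] + [INF] * (N_STATES - 1)         # path metrics
    paths = [[] for _ in range(N_STATES)]
    for t in range(len(rx) // 2):
        r0, r1 = rx[2 * t], rx[2 * t + 1]
        new_pm, new_paths = [INF] * N_STATES, [None] * N_STATES
        for s in range(N_STATES):
            if pm[s] == INF:
                continue
            for b in (0, 1):                    # add-compare-select
                reg = (b << 2) | s
                e0, e1 = (1 - 2 * parity(reg & g) for g in G)
                m = pm[s] + (r0 - e0) ** 2 + (r1 - e1) ** 2
                ns = reg >> 1
                if m < new_pm[ns]:
                    new_pm[ns], new_paths[ns] = m, paths[s] + [b]
        pm, paths = new_pm, new_paths
    return paths[0]

msg = [1, 0, 1, 1, 0]
rx = [1.0 - 2.0 * c for c in encode(msg + [0, 0])]   # zero tail, no noise
assert viterbi_soft(rx)[:len(msg)] == msg
```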
8.
Phonological working memory in children with low reading and spelling abilities. Παπακώστα, Δέσποινα, 11 January 2010 (has links)
The purpose of the present study was to test the hypothesis that children with learning disabilities make less use of phonological coding and rehearsal. Within the framework established earlier by Steinbrink and Klatte, the study examined the performance of 14 second-grade students with low reading and spelling abilities and 14 same-grade students with high corresponding abilities on serial recall tasks. The stimuli varied in phonological similarity and word length. Presentation was visual and auditory and was combined with visual and verbal recall, in order to investigate the strategies that children with learning disabilities choose depending on task demands. The performance of children with reading and spelling weaknesses was lower in all conditions except the visual presentation - visual recall condition. However, the effects of phonological similarity and word length did not differ between the groups; all participants therefore made equal use of phonological coding and rehearsal. Furthermore, in conditions that favoured visual strategies, all participants used a combination of phonological and visual strategies. The results lead to the conclusion that children with reading and spelling weaknesses do use the phonological loop, but in a less efficient way.
9.
Iterative decoding algorithms for LDPC codes and a study of the effect of quantization error on the performance of the Log Sum-Product algorithm. Κάνιστρας, Νικόλαος, 25 May 2009 (has links)
LDPC codes belong to the category of linear block codes. They are error-control codes, and more specifically error-correcting codes. Although they were invented by Gallager in the early 1960s, they were scarcely considered for some 35 years; only in the late 1990s were they rediscovered, attracting intense interest from the research community thanks to decoding performance that approaches the Shannon limit. They are parity-check codes whose defining characteristic, and the source of their name, is a low-density parity-check matrix. Since encoding these codes is relatively simple, it is the decoding procedure that largely determines the characteristics of interest, such as error-correcting capability (performance) and power consumption. For this reason various iterative decoding algorithms have been developed; despite their number and their many variants, a theoretical analysis of their performance has not yet been feasible.

This work presents the most important iterative decoding algorithms for LDPC codes developed to date. The algorithms are implemented in MATLAB and compared on the basis of simulation results. The most efficient among them, the so-called log Sum-Product algorithm, relies heavily on a rather complex, highly nonlinear function, Φ(x). Implementing this function in hardware imposes finite-precision representation, i.e., quantization. The roundoff error introduced by this process places a limit on the algorithm's performance. The study carried out in this work identified two mechanisms by which quantization error enters the log Sum-Product algorithm, and derived a theoretical expression for the activation probability of each mechanism during the first iteration of the algorithm.

The way the introduced quantization error affects the decisions taken by the algorithm at the end of each iteration was also studied, and a theoretical model of this mechanism was developed. The model gives the probability that the algorithm's decision changes, relative to the infinite-precision case, due to the quantization error of Φ(x); it is not yet complete, however, since some of its parameters are determined from experimental results. Completing the model so that it is fully theoretical could be the subject of future research, as it would make it possible to determine the performance degradation of the algorithm for a particular quantization scheme of the function, avoiding time-consuming simulations.
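A sketch of the function at the heart of that analysis and of a finite word-length version of it (Python; the uniform quantizer below is an assumed illustration, not the thesis' exact quantization scheme):

```python
import numpy as np

def phi(x):
    """Phi(x) = -ln(tanh(x/2)) for x > 0, used in the check-node update of
    log Sum-Product decoding. It is self-inverse: phi(phi(x)) = x."""
    return -np.log(np.tanh(x / 2.0))

def quantize(v, n_bits=6, v_max=8.0):
    """Uniform n_bits quantizer on [0, v_max] (an assumed scheme)."""
    step = v_max / (2 ** n_bits - 1)
    return np.clip(np.round(v / step) * step, 0.0, v_max)

x = np.linspace(0.05, 8.0, 400)
roundoff = quantize(phi(x)) - phi(x)
print(f"max |roundoff| with 6 bits: {np.abs(roundoff).max():.4f}")

# The error propagates through the check-node update
#   m = phi( sum_i phi(|L_i|) ),
# so small per-evaluation roundoff can flip marginal bit decisions,
# which is the effect the thesis models.
```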
10.
Serial recall ability in students with reading and spelling difficulties: a study of the effects of phonological similarity and word length. Μαματά, Μαρία, 08 July 2011 (has links)
The present study, a replication of Steinbrink and Klatte (2008), investigates the relation between immediate serial recall of phonological information and the reading and spelling ability of children whose native language is Greek. Many studies have shown that children with reading and spelling difficulties do not apply phonological strategies in serial recall tasks in the most efficient way. A group of 15 third-grade students without reading and spelling difficulties and a corresponding group of 15 students with such difficulties were presented with lists of four high-frequency nouns each, for immediate serial recall. Word length and phonological similarity, as well as presentation modality (visual vs. auditory) and type of recall (visual vs. verbal), were varied as within-subject factors in a mixed design. In all experimental conditions, good readers performed better than poor readers. Phonological similarity did not affect performance in either group of children. In contrast, the word length effect differed between the groups, which may indicate deficient phonological coding and rehearsal in children with reading and spelling difficulties. With regard to the order of presentation, the two groups used similar strategies in most experimental conditions. The results demonstrate that poor readers do use the phonological loop; their difficulties stem instead from inadequate application of various strategies due to deficits in phonological processing.