81

Högläsning i skolan : Arbetssätt för att främja elevers lärande.

Wittström, Amanda, Kristensson, Wilma, Karlsson, Hanna January 2024
Our intention with this study is to identify methods that support the work with reading aloud. The purpose of this literature review is therefore to examine previous research on different approaches to reading aloud that benefit students' reading skills. This purpose leads to the research question on which the review is based: Which methods of reading aloud benefit the development of students' reading skills? Data collection consists of systematic searches with search strings in various databases, including ERIC (EBSCO), Swepub, Scopus and DiVA; the research reviewed is both international and national. Thematic analysis is used to analyze the collected data, and the themes identified in the analysis form the headings of the results section. The results show that interactive read-alouds are a method that benefits students' literacy development. Other factors, such as the choice of book and the time of day, also influence the promotion of reading development. The conclusion of the review is that interactive read-alouds, the choice of book and the time of day, among other factors, promote literacy and comprehension, and that active learning contributes to increased motivation among students.
82

Vi hoppar över helvetesgapet och vi skyddar oss mot vildvittrorna : En kvalitativ studie om verksamma lärares aktiva arbete med högläsning / We jump over the hell mouth and we protect us against the wild imps. : A qualitative thesis on how working teachers actively use read-aloud in their education

Gillberg, Isabella January 2017
The purpose of this thesis is to study how working teachers view the use of read-alouds as a pedagogical tool to develop pupils' reading and writing skills. Four teachers working in grades F-2 were interviewed about read-alouds as a pedagogical tool; the interviews were semi-structured and the teachers were chosen through snowball sampling. The results show that the teachers, in line with previous research, consider read-alouds as a pedagogical tool to have a positive effect on pupils' development of reading and writing skills. The teachers point out that the teacher's own attitude towards reading is one of the factors contributing to pupils' interest in reading, and that read-alouds also allow for varied teaching. The study further shows that the teachers believe read-aloud activities contribute to the development of vocabulary, grammar, and the ability to interpret texts and make connections to one's own experiences.
83

The teaching of choral sight singing: analyzing and understanding experienced choral directors' perceptions and beliefs

Sanders, Ronald Byron 08 April 2016
The purpose of this study was to analyze and understand experienced choral directors' perceptions and beliefs on a variety of topics surrounding the teaching and learning of secondary choral music sight singing, or sight reading. A focus group of eight highly successful college, high school, and middle school choral music educators addressed seven questions. The investigation gathered qualitative data covering the purposes of teaching sight singing, the positive or negative attributes of movable Do, fixed Do, and numbers, and a review of sight-singing curricula. Further, the investigation gathered data on the effect, if any, of instrumental training on a student's sight-singing ability, and on the use and effectiveness of Curwen or Kodály hand signs and of sight-singing assessment for students. Additional data were gathered concerning how secondary music educators were evaluated. Results suggested that the focus group's purpose in teaching sight singing was to produce independent, self-reliant musicians. Individual sight-singing assessment was deemed important and should focus on how singers progress. Music composed specifically for sight-singing contests or festivals should contain challenging notes and rhythms, dynamic changes, phrase markings, and at least one tempo or meter change. Further, music teacher evaluations were discussed, coded, and analyzed. Twenty-nine recommendations are offered that are designed to make sight singing more efficient and more effective in today's choral music classrooms. While there are some very good sight-singing materials in print, music publishers who contemplate printing new instructional material should offer a holistic approach to musicianship. Adjudicators for choral sight-singing festivals and contests should be trained. Choirs entering a sight-singing performance should be adjudicated on musical elements such as meter changes, correct tempi, phrasing, tone, articulation, and dynamics, not merely on performing the correct notes and rhythms. Many more recommendations were offered to secondary and college choir teachers, supervisors, contest chairmen, adjudicators, composers, music publishers, and students. The investigation was not intended to determine a recommended method for sight-singing instruction or assessment; rather, its purpose was to understand and analyze experienced choral directors' perceptions and beliefs concerning sight singing on secondary campuses.
84

The mapping task and its various applications in next-generation sequencing

Otto, Christian 23 March 2015
The aim of this thesis is the development and benchmarking of computational methods for the analysis of high-throughput data from tiling arrays and next-generation sequencing. Tiling arrays have been a mainstay of genome-wide transcriptomics, e.g., in the identification of functional elements in the human genome. Due to limitations of existing methods for analyzing such data, a novel statistical approach is presented that identifies expressed segments as significant differences from the background distribution and thus avoids dataset-specific parameters. This method detects differentially expressed segments in biological data with significantly lower false discovery rates and equivalent sensitivities compared to commonly used methods. In addition, it is also clearly superior in the recovery of exon-intron structures. Moreover, the search for local accumulations of expressed segments in tiling array data has led to the identification of very large expressed regions that may constitute a new class of macroRNAs. The thesis then proceeds to next-generation sequencing (NGS), for which various protocols have been devised to study genomic, transcriptomic, and epigenomic features. One of the first crucial steps in most NGS data analyses is the mapping of sequencing reads to a reference genome. This work introduces algorithmic methods to solve the mapping task for three major NGS protocols: DNA-seq, RNA-seq, and MethylC-seq. All methods have been thoroughly benchmarked and integrated into the segemehl mapping suite. First, mapping of DNA-seq data is facilitated by the core mapping algorithm of segemehl. Since the initial publication, it has been continuously updated and expanded. Here, extensive and reproducible benchmarks are presented that compare segemehl to state-of-the-art read aligners on various data sets. The results indicate that it is not only more sensitive in finding the optimal alignment with respect to the unit edit distance but also very specific compared to most commonly used alternative read mappers. These advantages are observable for both real and simulated reads and are largely independent of the read length and sequencing technology, but they come at the cost of higher running time and memory consumption. Second, the split-read extension of segemehl, presented by Hoffmann, enables the mapping of RNA-seq data, a computationally more difficult form of the mapping task due to the occurrence of splicing. Here, the novel tool lack is presented, which aims to recover missed RNA-seq read alignments using de novo splice junction information. It performs very well in benchmarks and may thus be a beneficial extension to RNA-seq analysis pipelines. Third, a novel method is introduced that facilitates the mapping of bisulfite-treated sequencing data. This protocol is considered the gold standard in genome-wide studies of DNA methylation, one of the major epigenetic modifications in animals and plants. The treatment of DNA with sodium bisulfite selectively converts unmethylated cytosines to uracils, while methylated ones remain unchanged. The bisulfite extension developed here performs seed searches on a collapsed alphabet followed by bisulfite-sensitive dynamic programming alignments. Thus, it is insensitive to bisulfite-related mismatches and does not rely on post-processing, in contrast to other methods. In comparison to state-of-the-art tools, this method achieves significantly higher sensitivities and is competitive in running time when mapping millions of sequencing reads to vertebrate genomes.
Remarkably, the increase in sensitivity does not come at the cost of decreased specificity and may thus ultimately improve the accuracy of methylation-rate calling. Lastly, the potential of mapping strategies for de novo genome assemblies is demonstrated with the introduction of a new guided assembly procedure, termed crystallization. It incorporates mapping as a major component and uses additional information (e.g., annotation) as a guide. With this method, the complete mitochondrial genome of Eulimnogammarus verrucosus was successfully assembled even though the sequencing library was heavily dominated by nuclear DNA. In summary, this thesis introduces algorithmic methods that significantly improve the analysis of tiling array, DNA-seq, RNA-seq, and MethylC-seq data, and proposes standards for benchmarking NGS read aligners. Moreover, it presents a new guided assembly procedure that has been successfully applied in the de novo assembly of a crustacean mitogenome.
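The bisulfite mapping approach described in this abstract rests on two ideas: seeding on a collapsed alphabet and an alignment step that does not penalize bisulfite-induced C-to-T mismatches. The following Python sketch only illustrates that principle under simplifying assumptions (reads from the C-to-T converted strand, ungapped comparison, invented function names); it is not the segemehl bisulfite implementation, which performs index-based seed searches and full dynamic-programming alignments.

```python
# Illustrative sketch of bisulfite-aware read comparison, assuming reads originate
# from the C->T converted (plus) strand. Not the segemehl implementation.

def collapse_ct(seq: str) -> str:
    """Collapsed alphabet used for seeding: every C is treated as T."""
    return seq.upper().replace("C", "T")

def bisulfite_mismatches(read: str, ref: str) -> int:
    """Count mismatches in an ungapped comparison, ignoring genomic C vs read T
    (a possible unmethylated, bisulfite-converted cytosine)."""
    mismatches = 0
    for r, g in zip(read.upper(), ref.upper()):
        if r == g:
            continue
        if g == "C" and r == "T":  # bisulfite conversion, not a sequencing error
            continue
        mismatches += 1
    return mismatches

def methylation_calls(read: str, ref: str):
    """Naive per-cytosine call: a genomic C read as C is methylated, read as T unmethylated."""
    calls = []
    for pos, (r, g) in enumerate(zip(read.upper(), ref.upper())):
        if g == "C":
            state = "methylated" if r == "C" else "unmethylated" if r == "T" else "ambiguous"
            calls.append((pos, state))
    return calls

if __name__ == "__main__":
    ref = "ACGTCCGTA"
    read = "ATGTTCGTA"  # Cs at positions 1 and 4 read as T, C at position 5 read as C
    print(collapse_ct(ref))                 # ATGTTTGTA (seeding view)
    print(bisulfite_mismatches(read, ref))  # 0
    print(methylation_calls(read, ref))     # [(1, 'unmethylated'), (4, 'unmethylated'), (5, 'methylated')]
```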
85

Genome Informatics for High-Throughput Sequencing Data Analysis: Methods and Applications

Hoffmann, Steve 17 September 2014
This thesis introduces three different algorithmic and statistical strategies for the analysis of high-throughput sequencing data. First, we introduce a heuristic method based on enhanced suffix arrays to map short sequences to larger reference genomes. The algorithm builds on the idea of an error-tolerant traversal of the suffix array of the reference genome, in conjunction with the concept of matching statistics introduced by Chang and the bit-vector alignment algorithm proposed by Myers. The algorithm supports paired-end and mate-pair alignments, and the implementation offers methods for primer detection as well as primer and poly-A trimming. In our own benchmarks as well as independent benchmarks, this tool outcompetes other currently available tools with respect to sensitivity and specificity on simulated and real data sets for a large number of sequencing protocols. Second, we introduce a novel dynamic programming algorithm for the spliced alignment problem. The advantage of this algorithm is its capability to detect not only collinear splice events, i.e., local splice events on the same genomic strand, but also circular and other non-collinear splice events. This succinct and simple algorithm handles all of these cases at the same time with high accuracy. While it is on par with other state-of-the-art methods for collinear splice events, it outcompetes other tools for many non-collinear splice events. The application of this method to publicly available sequencing data led to the identification of a novel isoform of the tumor suppressor gene p53. Since this gene is one of the best-studied genes in the human genome, this finding is quite remarkable and suggests that the application of our algorithm could help to identify a plethora of novel isoforms and genes. Third, we present a data-adaptive method to call single nucleotide variations (SNVs) from aligned high-throughput sequencing reads. We demonstrate that our method, based on empirical log-likelihoods, automatically adjusts to the quality of a sequencing experiment and thus renders a "decision" on when to call an SNV. In our simulations this method is on par with current state-of-the-art tools. Finally, we present biological results that have been obtained using the special features of the presented alignment algorithms.
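The core mapper described above combines an error-tolerant suffix-array traversal for seeding with Myers' bit-vector algorithm for alignment under the unit edit distance. As a rough, self-contained illustration of the bit-vector idea, the following Python sketch computes the edit distance between a read and a reference window using bit-parallel dynamic-programming columns; it is a textbook rendering of Myers (1999), not code from the thesis, and the function name and masking details are this sketch's own choices.

```python
# Minimal sketch of Myers' (1999) bit-parallel edit distance, the kind of
# verification step used after seeding. Illustrative only, not thesis code.

def bitvector_edit_distance(pattern: str, text: str) -> int:
    """Unit edit distance between pattern and text using bit-vector DP columns."""
    m = len(pattern)
    if m == 0:
        return len(text)
    mask = (1 << m) - 1          # keep all vectors to m bits
    high = 1 << (m - 1)          # bit corresponding to the last pattern row

    # peq[c]: bitmask of positions where character c occurs in the pattern
    peq = {}
    for i, c in enumerate(pattern):
        peq[c] = peq.get(c, 0) | (1 << i)

    pv, mv = mask, 0             # vertical +1 / -1 delta vectors of the DP column
    score = m                    # DP cell D[m][0]
    for c in text:
        eq = peq.get(c, 0)
        xv = eq | mv
        xh = (((eq & pv) + pv) ^ pv) | eq
        ph = mv | (~(xh | pv) & mask)
        mh = pv & xh
        if ph & high:            # horizontal delta of the last row decides the score
            score += 1
        if mh & high:
            score -= 1
        ph = ((ph << 1) | 1) & mask   # carry in +1: boundary row D[0][j] = j
        mh = (mh << 1) & mask
        pv = mh | (~(xv | ph) & mask)
        mv = ph & xv
    return score

if __name__ == "__main__":
    print(bitvector_edit_distance("ACGTACGT", "ACGTTACGT"))  # 1 (one insertion)
    print(bitvector_edit_distance("kitten", "sitting"))      # 3
```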
86

Brott och straff i lättläst adaption : En komparativ analys av Fjodor Dostojevskijs Brott och straff och romanen i lättläst bearbetning / Crime and punishment in easy-to-read adaptation : A comparative analysis of Fyodor Dostoevsky’s Crime and punishment and the novel in easy-to-read adaptation

Wahlström, Fredrik January 2021
In Sweden today, demand for easy-to-read literature is surging, but its shape and form are being questioned: is this type of literature really easier to understand, or does the removal of content in fact make it less comprehensible? The purpose of this study is to analyze how easy-to-read literature handles complex motifs and whether it changes the reading experience compared with the original. To that end, the study carries out a comparative analysis of Fyodor Dostoevsky's Crime and Punishment and its easy-to-read adaptation, written by Johan Werkmäster. To examine how Werkmäster's adaptation handles complex motifs, the comparative analysis draws on three motifs from Professor George Strem, who argues that Crime and Punishment is built on suffering, humility and the ideal of the Superman. To answer whether the easy-to-read version changes the reading experience, the study uses Professor Rita Felski's concept of shock and Anna Nordenstam and Christina Olin-Scheller's concept of identification. The analysis shows that the easy-to-read version of Crime and Punishment retains the main story but excludes the ideal of the Superman, which alters the understanding of suffering and humility. As a consequence, Felski's shock is replaced by Nordenstam and Olin-Scheller's identification, showing that the reading experience is indeed changed. The conclusion is that the changes in the adaptation make it harder for the reader to understand Raskolnikov. The result also raises the question of what matters most in easy-to-read literature: whereas the authors and publishers argue that literacy and motivation are essential for the target audience, Felski, Nordenstam and Olin-Scheller hold that the reading experience is more important. This points to the need for more research in this area, in order to reach a consensus on what easy-to-read literature should be and how it can best help readers develop their reading ability.
87

Facilitating Improved Reading Fluency in a Rural School District using Cross-Age Peer Tutoring

McMullin, William Arrel 09 May 2015
Peer tutoring as an instructional strategy has been used by school personnel to increase academic achievement in the classroom setting. Traditionally, the peer tutoring concept relies on student partnerships linking higher-achieving students with lower-achieving students for structured reading sessions. Recently, studies have focused on linking students of comparable reading achievement across grade levels, that is, cross-age peer tutoring. Research suggests that using peer tutors may promote higher reading fluency in at-risk students compared to teacher instruction. One potential reason for this is students' comfort level with peers, which allows reading growth to develop more readily. The purpose of this study is to determine the effectiveness, efficiency, and scalability of cross-age peer tutoring for reading fluency and reading comprehension. The study involved seven fifth-grade struggling readers as tutors to seven third-grade struggling readers. Reading to Read was used as the intervention protocol. The dyads met for five weeks, with progress monitoring conducted at the beginning of each week. Results indicated a consistent benefit in improving reading fluency in 13 of the 14 participants. Several implications of the study can be identified. Peer-assisted learning can benefit the reading fluency of both participants, and participating in the peer-assisted learning process improves below-grade-level readers' attitudes toward reading. Further implications, limitations, and future research relating to the results of this study are also discussed.
88

Att identifiera lässvårigheter hos elever i förskoleklass : En kvalitativ studie av sju lärares åsikter om möjligheter och svårigheter med tidig identifikation / Early identification of reading difficulties in young learners : A qualitative study of seven teachers’ views on advantages and difficulties with early identification

Strandberg, Christin January 2016
One of the most important tasks teachers have is to teach all students how to read. Unfortunately, there are always students who encounter problems in their reading development and who, for one reason or another, develop reading difficulties. Research has shown that early intervention is the most effective, which means that teachers need to be able to identify students who may need extra support at an early stage. The purpose of this study is to highlight teachers' opinions about early identification of students at risk of developing reading difficulties and to investigate how such identification is carried out in practice. To this end, qualitative interviews were conducted with seven teachers working in Swedish preschool classes (förskoleklass). The interviews addressed the teachers' views on early identification, the advantages and difficulties of early identification, and how identification is done in practice. The results show that the teachers consider early identification of reading difficulties to be advantageous and important. Through informal observations of students' language development, combined with results from more formal screening, the teachers believe that they are able to identify the students who are at risk of developing reading difficulties.
89

New data-driven approaches to text simplification

Štajner, Sanja January 2016
No description available.
90

Read/write assist circuits and SRAM design

Nguyen, Quocdat Tai 23 September 2010
This report discusses the design of read/write assist circuits, which are used in SRAM design to overcome bit-cell variation. It also explains the variability problems in an SRAM bit-cell and several approaches to addressing them. The basic operations, the static noise margin (SNM) concept, and the write margin of an SRAM are described theoretically and measured in simulation. The write-assist circuit, a negative bit-line voltage bias scheme, is discussed and implemented at the transistor level using a six-transistor (6T) SRAM cell. With the write-assist circuit, the implemented memory array successfully performs a write operation at 0.6 V and -25°C, a condition under which the same operation would fail without the assist. In simulation, the write-assist circuit achieves a negative bias voltage of -70 mV on the SRAM bit-lines. Implementing the negative bit-line voltage scheme incurs overhead in chip area, power consumption, and leakage current.
