About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
181

Analys av samtal mellan personer med afasi och logopeder/anhöriga : Användande av kommunikativa resurser i samarbete mot gemensam förståelse

Johansson, Sofia, Östlund, Pernilla January 2007 (has links)
I föreliggande studie undersöktes kommunikativa resurser i samtal mellan tre personer med afasi och deras respektive logoped/anhörig. Syftet var att identifiera och analysera resurser som samtalsdeltagarna gemensamt använde för att uppnå intersubjektivitet. Vidare undersöktes om det, utifrån vem personen med afasi samtalade med, fanns någon inverkan på hur de kommunikativa resurserna användes. Sex dyader spelades in och samtalsanalys användes för att studera materialet. Samtalsämnen valdes fritt av samtalsdeltagarna. Analysen resulterade i identifiering av tre bevarade resurser hos personerna med afasi: upprepningar, gester och skratt. Det gemensamma för dessa resurser var att de förekom i den naturliga sekventialiteten i samtalet, att deras kommunikativa funktioner uppstod i samarbetet mellan samtalsdeltagarna och att de bidrog till intersubjektiviteten. Resurserna visade även på en social kompetens hos deltagarna med afasi som ofta döljs av de språkliga hindren. Då de kommunikativa resurserna fick liknande funktioner i de olika dyaderna, är resultatet troligen generaliserbart till annan interaktion där personer med afasi deltar. Resursernas kommunikativa funktioner föreföll inte bero på vem samtalspartnern var. Slutsatsen var att språklig kompetens hos personer med afasi framför allt bör ses ur ett interaktivt perspektiv och att ett sådant förhållningssätt är önskvärt i logopediskt behandlingsarbete. / Analysis of Talk-in-Interaction involving People with Aphasia and Speech and Language Pathologists/Spouses: The Use of Communicative Resources in Collaboration toward Intersubjectivity. In the present study, communicative resources in conversations involving three persons with aphasia and their speech and language pathologists/spouses were investigated. The main purpose was to identify and analyse collaboratively used resources for achieving intersubjectivity.
In addition, possible changes in the use of the communicative resources depending on the conversational partner were investigated. The conversations of six dyads were recorded and analysed. Conversational topics were chosen by the participants. Through the analysis, three preserved resources for participants with aphasia could be identified: repetitions, gestures and laughter. All resources occurred within the natural sequentiality of the conversation; their communicative functions emerged in the collaboration between the participants and contributed to intersubjectivity. The social competence of the persons with aphasia, often concealed by the language impairment, was also revealed. The results may possibly be generalised to other communicative activities in which persons with aphasia participate, since similar communicative functions were achieved through the same resources in different dyads. The functions of the resources were not determined by the conversational partner. To conclude, it may be argued that the linguistic competence of persons with aphasia should mainly be considered from an interactive perspective and that this view is preferable in language therapy.
182

Seeing Otherwise : Renegotiating Religion and Democracy as Questions for Education

Bergdahl, Lovisa January 2010 (has links)
Rooted in philosophy of education, the overall purpose of this dissertation is to renegotiate the relationship between education, religion, and democracy by placing the religious subject at the centre of this renegotiation. While education is the main focus, the study draws its energy from the fact that tensions around religious beliefs and practices seem to touch upon the very heart of liberal democracy. The study reads the tensions religious pluralism seems to be causing in contemporary education through a post-structural approach to difference and subjectivity. The purpose is accomplished in three movements. The first aims to show why the renegotiation is needed by examining how the relationship between education, democracy, and religion is currently being addressed in cosmopolitan education and deliberative education. The second movement introduces a model of democracy, radical democracy, that sees the process of defining the subject as a political process. It is argued that this model offers possibilities for seeing religion and the religious subject as part of the struggle for democracy. The third movement aims to develop how the relationship between education, democracy, and religion might change if we bring them together in a conversation whose conditions are not ‘owned’ by any one of them. To create this conversation, Hannah Arendt, Jacques Derrida, Søren Kierkegaard, and Emmanuel Levinas are brought together around three themes – love, freedom, and dialogue – referred to as ‘windows.’ The windows offer three examples in which religious subjectivity is made manifest but they also create a shift in perspective that invites other ways of seeing the tensions between religion and democracy. The aim of the study is to discuss how education might change when religion and democracy become questions for it through the perspectives offered in the windows and what this implies for the particular religious subject.
184

Embleme im interkulturellen Vergleich.

Merz, Andreas 05 November 2013 (has links) (PDF)
This thesis highlights intercultural misunderstandings that can occur due to the different semantic values of emblems, i.e. gestures generally defined as having an accepted verbal translation within a given culture or community. To illustrate such problems, the semantic meanings of sixteen emblems in Colombia and Germany are analyzed. Misunderstandings caused by the use of these emblems are then explained from a communicative point of view, using Austin's speech-act theory.
185

輔助視障者以聲音記錄日常生活之手機介面研究 / HearMe: assisting the visually impaired to record vibrant moments of everyday life

蔡宜璇, Tsai, Yi Hsuan Unknown Date (has links)
視覺障礙者主要透過聲音來記錄生活與體驗世界，如同明眼人以文字或相片記憶重要時刻一般。然而觀察現有錄音裝置與軟體，皆尚未能提供適合視障者使用的完善錄音流程；即使是有語音功能的輔助裝置，不僅其價格、軟體更新、硬體維修等因市場小眾較為不易，也因為只是單純的錄音工具而無法流暢的銜接後續的檔案整理與分享。直到近幾年智慧型手機的興起，藉著其為市場主流產品、有豐富的軟體支援、隨時可連上網路等條件，逐漸成為視障者更好的輔助裝置的選擇。 為使視障者也能順利的操作觸控式介面，過往研究者針對Eyes-free情境提出了相關設計原則，而現今手機作業系統也大多內建螢幕報讀機制，讓視障者能自在與自信的與手機互動。雖然手機的可及性與通用性越來越受重視，專門為視障者設計的軟體卻並不多，輔助功能的開發資源和準則也待進一步的發展。本研究於初期的使用者觀察與訪談中，先深入了解視障者利用聲音記事的習慣與遇到的困難，並再進行初步設計方案的功能需求訪談，以切合使用者實際錄音的情境。 綜上所述，本研究為以視覺障礙者為目標使用族群，於觸控式手機設計錄音輔助軟體「HearMe」，解決視障者日常生活中聲音記錄的問題，並嘗試以手勢和語音設計，探索視障者操作觸控式介面的可能性。本系統原型有幾項重要特點：(1)快捷手勢可開始或結束錄音、(2)標記與播放重點段落、(3)即時編輯與歸類檔案、(4)以分類清單快速搜尋、(5)行動通訊立即分享；其他特色功能包括語音檔名、快轉與倒轉手勢、自訂群組和地標等。本系統原型開發時運用迭代設計流程共實作三次週期，每個版本皆經過設計、測試、分析、和調整功能等過程，逐步驗證系統的可行性與實用性。 經過三次的設計循環與共計18位視障者參與測試，本研究於第三版系統原型完成能實際應用在生活中的錄音軟體。受測者認為HearMe操作簡單、容易學習，快速播放重點段落省時省力、分類清楚而方便尋找檔案；同時它能夠以完善的語音提示和整合的錄音流程彌補現有裝置不足的部分，讓手機成為生活中記錄聲音的最佳輔助工具。最後，本研究以Google Analysis分析HearMe實際使用數據，並搭配訪談回饋總結系統設計的成果與互動設計之建議，提供HearMe或其他開發者做為日後設計的參考。 / The auditory sense is the primary channel through which the visually impaired record their lives and experience the world, just as sighted people use words and photos to capture important moments. However, current recording devices mostly lack a complete recording flow suited to the visually impaired. Devices with voice feedback are expensive, rarely receive software updates, and lack maintenance support because of their small market. Moreover, these devices can only record and play; they do not support organizing files or sharing them with others. In recent years the popularity of smartphones has risen. As mainstream products with a wide variety of software and an always-available network connection, they show the potential to become an alternative assistive device for the visually impaired. To allow the visually impaired to use touch screens, researchers have proposed several design principles for eyes-free situations.
Moreover, screen readers are embedded in smartphone operating systems such as iOS and Android, enabling the visually impaired to interact with smartphones freely and confidently. While the accessibility and universality of smartphones have drawn attention, few applications are tailored for visually impaired users, and accessibility development resources and guidelines still need further work. In the first phase of user interviews, we investigated their recording habits and difficulties. After the design strategy was formed, we conducted a second round of interviews to verify that the defined functions matched their actual recording needs. This study focuses on the visually impaired and aims to resolve the everyday recording and memorizing problems they face by developing an accessible recording application for smartphones. The prototype, HearMe, provides specialized gestures and voice feedback. Its main features are: (1) a shortcut gesture to start and finish recording, (2) marking and playing important parts, (3) editing and grouping files on the device, (4) rapid searching via classified lists, and (5) real-time sharing. Other features include audio file names, gestures to skip forward and backward, and custom groups and landmarks. The prototype was developed through an iterative design process repeated over three cycles. Each generation went through design, testing, analysis, and modification, so that the system's feasibility and usability were gradually validated. After three design cycles involving a total of 18 visually impaired participants, the third prototype is a recording application that can be used in real life. Participants commented that HearMe is easy to operate and learn, that playing marked parts saves time and effort, and that clear grouping makes files easy to find. In addition, it provides well-designed audio prompts and an integrated recording flow, compensating for the shortcomings of current devices.
These advantages make HearMe a strong assistive tool for recording sound in everyday life. Finally, this study concludes with design results and interaction-design suggestions drawn from HearMe's actual usage data in Google Analytics together with interview feedback, providing a reference for HearMe and other developers of assistive software.
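As a purely hypothetical illustration of the "marking and playing important parts" feature described above (the class and method names are invented here, not taken from HearMe), one way to implement it is to store timestamps relative to the start of the recording and derive playable segments from them:

```python
import time

class MarkableRecording:
    """Sketch of a recording session whose shortcut gesture drops
    timestamp marks; playback can then jump between marked segments."""

    def __init__(self):
        self.start = None
        self.marks = []  # offsets in seconds from the start of recording

    def begin(self, now=None):
        """Start the recording clock."""
        self.start = now if now is not None else time.monotonic()

    def mark(self, now=None):
        """Called by the shortcut gesture: remember the current offset."""
        now = now if now is not None else time.monotonic()
        self.marks.append(now - self.start)

    def segments(self, duration):
        """Return (start, end) pairs from each mark to the next mark
        (or to the end of the file): the 'important parts' to play."""
        bounds = sorted(self.marks) + [duration]
        return list(zip(bounds, bounds[1:]))

rec = MarkableRecording()
rec.begin(now=0.0)
rec.mark(now=12.5)
rec.mark(now=47.0)
print(rec.segments(duration=60.0))  # → [(12.5, 47.0), (47.0, 60.0)]
```

The `now` parameters exist only to make the sketch deterministic; a real application would rely on the monotonic clock and the audio engine's playback position.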
186

The dependency relations within Xhosa phonological processes

Podile, Kholisa 30 June 2002 (has links)
See file
187

Designing expressive interaction techniques for novices inspired by expert activities : the case of musical practice

Ghomi, Emilien 17 December 2012 (has links) (PDF)
As interactive systems are now used to perform a variety of complex tasks, users need systems that are at once expressive, efficient, and usable. Although simple interactive systems can be easily usable, interaction designers often consider that only expert practitioners can benefit from the expressiveness of more complex systems. Our approach, inspired by studies in phenomenology and psychology, underscores that non-experts have sizeable knowledge and advanced skills related to various expert activities with a social dimension, such as artistic activities, which they gain implicitly through their engagement as perceivers. For example, we identify various music-related skills mastered by non-musicians, gained when listening to music or attending performances. We make two main arguments. First, interaction designers can reuse such implicit knowledge and skills to design interaction techniques that are both expressive and usable by novice users. Second, as expert artifacts and expert learning methods have evolved over time and have proven efficient in overcoming the complexity of expert activities, they can serve as a source of inspiration for making expressive systems more easily usable by novice users. We provide a design framework for studying the usability and expressiveness of interaction techniques as two new aspects of the user experience, and explore this framework through three projects. In the first project we study the use of rhythmic patterns as an input method, and show that novice users are able to reproduce and memorize large vocabularies of patterns. This is made possible by the natural abilities of non-musicians to perceive, reproduce, and make sense of rhythmic structures. We define a method to create expressive vocabularies of patterns, and show that novice users are able to use them efficiently as command triggers. In the second project, we study the design and learning of chording gestures on multitouch screens.
We introduce design guidelines for creating expressive chord vocabularies that take the mechanical constraints and degrees of freedom of the human hand into account. We evaluate the usability of such gestures in an experiment and present an adapted learning method inspired by the teaching of chords in music. We show that novice users are able to reproduce and memorize our vocabularies of chording gestures, while our learning method can improve long-term memorization. The final project focuses on music software used in live performances and proposes a framework for designing "instrumental" software that allows expert musical playing while keeping its elementary functionalities accessible to novices, as is the case with acoustic instruments (for example, one can easily play a few chords on a piano without practice). We define a design framework inspired by a functional decomposition of acoustic instruments and present an adapted software architecture, both aiming to ease the design of such software and to align it with instrument-making. These projects show that, in these cases: (i) the implicit knowledge novices have about some expert activities can be reused for interaction; (ii) expert learning methods can inspire ways to make expressive systems more usable by novices; (iii) taking expert artifacts as a source of inspiration can help create usable and expressive interactive systems. In this dissertation, we propose the study of usability as an alternative to the focus on immediacy that characterizes current commercial interactive systems. We also propose methods that benefit from the richness of expert activities and from the implicit knowledge of non-experts to design interactive systems that are at the same time expressive and usable by novice users.
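The abstract above does not give the thesis's actual recognition algorithm, but the idea of matching tapped rhythmic patterns against a template vocabulary can be sketched as follows. The template names and timings here are invented; a common tempo-invariant trick is to normalize the inter-onset intervals before a nearest-neighbour comparison:

```python
def normalize(taps):
    """Convert tap timestamps into inter-onset intervals scaled to sum
    to 1, so the comparison is invariant to the overall tempo."""
    intervals = [b - a for a, b in zip(taps, taps[1:])]
    total = sum(intervals)
    return [i / total for i in intervals]

def classify(taps, templates):
    """Return the template whose normalized interval profile is closest
    (Euclidean distance); None if no template has the same tap count."""
    profile = normalize(taps)
    best, best_dist = None, float("inf")
    for name, tmpl in templates.items():
        if len(tmpl) != len(profile):
            continue  # different number of taps: cannot match
        dist = sum((p - t) ** 2 for p, t in zip(profile, tmpl)) ** 0.5
        if dist < best_dist:
            best, best_dist = name, dist
    return best

# Two hypothetical three-tap commands, given as interval profiles:
templates = {
    "open":  [0.5, 0.5],    # taps at equal intervals
    "close": [0.75, 0.25],  # long gap, then short gap
}
print(classify([0.0, 0.30, 0.61], templates))  # → "open"
```

A user tapping the same rhythm faster or slower produces the same normalized profile, which is one reason rhythmic vocabularies can remain usable for novices.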
188

Children's Vocabulary Development : The role of parental input, vocabulary composition and early communicative skills

Cox Eriksson, Christine January 2014 (has links)
The aim of this thesis is to examine the early vocabulary development of a sample of Swedish children in relation to parental input and early communicative skills. Three studies are situated in an overall description of early language development in children. The data analyzed in the thesis was collected within a larger project at Stockholm University (SPRINT: “Effects of enhanced parental input on young children’s vocabulary development and subsequent literacy development” [VR 2008-5094]). Data analysis was based on parental report via SECDI, the Swedish version of the MacArthur-Bates Communicative Development Inventories, and audio recordings. One study examined parental verbal interaction characteristics in three groups of children with varying vocabulary size at 18 months. The stability of vocabulary development at 18 and 24 months was investigated in a larger study, with focus on children’s vocabulary composition and grammatical abilities. The third study examined interrelations among early gestures, receptive and productive vocabulary, and grammar measured with M3L, i.e. the mean length of the three longest utterances, from 12 to 30 months. Overall results of the thesis highlight the importance of early language development. Variability in different characteristics of parental input is associated with variability in child vocabulary size. Children with large early vocabularies exhibit the most stability in vocabulary composition and the earliest grammatical development. Children’s vocabulary composition may reflect individual stylistic variation. Use of early gestures is associated differentially with receptive and productive vocabulary. Results of the thesis have implications for parents, child- and healthcare personnel, as well as researchers and educational practitioners.
The results underscore the importance of high quality in adult-child interaction, with rich input fine-tuned to children’s developmental levels and age, together with high awareness of early language development. / SPRINT project
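The M3L measure mentioned above can be made concrete with a small sketch. This is only an illustration (counting words rather than morphemes for simplicity, and with invented sample utterances): the mean length of the three longest utterances in a set.

```python
def m3l(utterances):
    """Mean length, in words, of the three longest utterances."""
    lengths = sorted((len(u.split()) for u in utterances), reverse=True)
    top3 = lengths[:3]  # fewer than three utterances: average what exists
    return sum(top3) / len(top3)

# Invented example utterances from a hypothetical parental report:
sample = [
    "more juice",
    "daddy go work now",
    "i want the big red ball",
    "no",
    "where is my teddy bear",
]
print(m3l(sample))  # longest three have 6, 5, and 4 words → 5.0
```

In the CDI instruments the utterances are the three longest the parent reports, so in practice the input list would already contain exactly three items.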
189

Template-basierte Klassifikation planarer Gesten

Schmidt, Michael 09 July 2014 (has links) (PDF)
The spread of mobile devices has led to growing interest in touch-based interaction. However, multi-touch input is still largely restricted to direct manipulation. In current applications, gestural commands, if used at all, exploit only single-touch. The underlying motive for the work at hand is the conviction that realizing advanced interaction techniques requires handy tools to support their interpretation. Barriers to custom implementations are lowered by providing proof of concept for manifold interactions, thereby making the benefits calculable for developers. Within this thesis, a recognition routine for planar, symbolic gestures is developed that can be trained by specifying templates and imposes no restrictions on the versatility of input. To provide a flexible tool, the interpretation of a gesture is independent of its natural variances, i.e., translation, scale, rotation, and speed. Additionally, the number of templates required per class is kept small, and classifications meet the real-time criteria common for typical user interactions. The gesture recognizer is based on integrating a nearest-neighbor approach into a Bayesian classification method. Gestures are split into meaningful elementary tokens, yielding a set of local features that a sensor-fusion process merges into a global maximum-likelihood representation. The flexibility and high accuracy of the approach are empirically proven in thorough tests. While retaining all requirements, the method is extended to support the prediction of partially entered gestures. Besides more efficient input, this allows direct-manipulation interactions to be specified by templates. The practical suitability of all concepts is demonstrated with two purpose-built applications providing versatile options for multi-finger input.
In addition to a trainable recognizer for domain-independent sketches, a multi-touch text input system is created and tested with users. It is established that multi-touch input is utilized in sketching if it is available as an alternative. Furthermore, a constructed multi-touch gesture alphabet allows for more efficient text input in comparison to its single-touch counterpart. The concepts presented in this work can be of equal benefit to UI designers, usability experts, and developers of feedforward mechanisms for dynamic training methods of gestural interactions. Likewise, a decomposition of input into tokens and its interpretation by maximum-likelihood matching with templates is transferable to other application areas such as the offline recognition of symbols. / Obwohl berührungsbasierte Interaktionen mit dem Aufkommen mobiler Geräte zunehmend Verbreitung fanden, beschränken sich Multi-Touch Eingaben größtenteils auf direkte Manipulationen. Im Bereich gestischer Kommandos finden, wenn überhaupt, nur Single-Touch Symbole Anwendung. Der vorliegenden Arbeit liegt der Gedanke zugrunde, dass die Umsetzung von Interaktionstechniken mit der Verfügbarkeit einfach zu handhabender Werkzeuge für deren Interpretation zusammenhängt. Auch kann die Hürde, eigene Techniken zu implementieren, verringert werden, wenn vielfältige Interaktionen erprobt sind und ihr Nutzen für Anwendungsentwickler abschätzbar wird. In der verfassten Dissertation wird ein Erkenner für planare, symbolische Gesten entwickelt, der über die Angabe von Templates trainiert werden kann und keine Beschränkung der Vielfalt von Eingaben auf berührungsempfindlichen Oberflächen voraussetzt. Um eine möglichst flexible Einsetzbarkeit zu gewährleisten, soll die Interpretation einer Geste unabhängig von natürlichen Varianzen - ihrer Translation, Skalierung, Rotation und Geschwindigkeit - und unter wenig spezifizierten Templates pro Klasse möglich sein.
Weiterhin sind für Nutzerinteraktionen im Anwendungskontext übliche Echtzeit-Kriterien einzuhalten. Der vorgestellte Gestenerkenner basiert auf der Integration eines Nächste-Nachbar-Verfahrens in einen Ansatz der Bayes'schen Klassifikation. Gesten werden in elementare, bedeutungstragende Einheiten zerlegt, aus deren lokalen Merkmalen mittels eines Sensor-Fusion Prozesses eine Maximum-Likelihood-Repräsentation abgeleitet wird. Die Flexibilität und hohe Genauigkeit des statistischen Verfahrens wird in ausführlichen Tests nachgewiesen. Unter gleichbleibenden Anforderungen wird eine Erweiterung vorgestellt, die eine Prädiktion von Gesten bei partiellen Eingaben ermöglicht. Deren Nutzen liegt - neben effizienteren Eingaben - in der nachgewiesenen Möglichkeit, per Templates spezifizierte direkte Manipulationen zu interpretieren. Zur Demonstration der Praxistauglichkeit der präsentierten Konzepte werden exemplarisch zwei Anwendungen entwickelt und mit Nutzern getestet, die eine vielseitige Verwendung von Mehr-Finger-Eingaben vorsehen. Neben einem Erkenner trainierbarer, domänenunabhängiger Skizzen wird ein System für die Texteingabe mit den Fingern bereitgestellt. Anhand von Nutzerstudien wird gezeigt, dass Multi-Touch beim Skizzieren verwendet wird, wenn es als Alternative zur Verfügung steht und die Verwendung eines Multi-Touch Gestenalphabetes im Vergleich zur Texteingabe per Single-Touch effizienteres Schreiben zulässt. Von den vorgestellten Konzepten können UI-Designer, Usability-Experten und Entwickler von Feedforward-Mechanismen zum dynamischen Lehren gestischer Eingaben gleichermaßen profitieren. Die Zerlegung einer Eingabe in Token und ihre Interpretation anhand der Zuordnung zu spezifizierten Templates lässt sich weiterhin auf benachbarte Gebiete, etwa die Offline-Erkennung von Symbolen, übertragen.
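The core idea of template-based gesture matching described in this record can be sketched in a few lines. This is a simplified, hypothetical illustration only (the thesis's actual method additionally performs tokenization, rotation invariance, and Bayesian sensor fusion): resample each stroke to a fixed number of points, normalize for position and scale, and pick the nearest template. All gesture data below is invented.

```python
import math

def resample(points, n=16):
    """Resample a stroke to n points evenly spaced along its arc length."""
    d = [0.0]  # cumulative arc length at each input point
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        d.append(d[-1] + math.hypot(x1 - x0, y1 - y0))
    total, out = d[-1], []
    for k in range(n):
        target = total * k / (n - 1)
        i = 1
        while i < len(d) - 1 and d[i] < target:
            i += 1
        t = 0.0 if d[i] == d[i - 1] else (target - d[i - 1]) / (d[i] - d[i - 1])
        out.append((points[i - 1][0] + t * (points[i][0] - points[i - 1][0]),
                    points[i - 1][1] + t * (points[i][1] - points[i - 1][1])))
    return out

def normalize(points):
    """Translate the centroid to the origin and scale to unit size,
    making the comparison invariant to position and scale."""
    cx = sum(x for x, _ in points) / len(points)
    cy = sum(y for _, y in points) / len(points)
    pts = [(x - cx, y - cy) for x, y in points]
    size = max(max(abs(x) for x, _ in pts),
               max(abs(y) for _, y in pts)) or 1.0
    return [(x / size, y / size) for x, y in pts]

def classify(stroke, templates, n=16):
    """Nearest-neighbour match: smallest mean point-to-point distance."""
    probe = normalize(resample(stroke, n))
    best, best_d = None, float("inf")
    for name, tmpl in templates.items():
        ref = normalize(resample(tmpl, n))
        dist = sum(math.hypot(px - rx, py - ry)
                   for (px, py), (rx, ry) in zip(probe, ref)) / n
        if dist < best_d:
            best, best_d = name, dist
    return best

# Invented templates: a horizontal line and an "L" shape.
templates = {
    "line": [(0, 0), (10, 0)],
    "L":    [(0, 0), (0, 10), (10, 10)],
}
print(classify([(2, 2), (3, 2.1), (9, 1.9)], templates))  # → "line"
```

Because each template contributes only its shape, adding a new gesture class is just adding an entry to the dictionary, which is the practical appeal of template-trainable recognizers.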
