  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
61

Everything integrated : a framework for associative writing in the Web

Miles-Board, Timothy J. January 2004 (has links)
Hypermedia is the vision of the complete integration of all information in any media, including text, image, audio and video. The depth and diversity of the World-Wide Web, the most successful and farthest-reaching hypermedia system to date, has tremendous potential to provide such an integrated docuverse. This thesis explores the issues and challenges surrounding the realisation of this potential through the process of Associative Writing - the authoring and publishing of integrated hypertexts which connect a writer's new contributions to the wider context of relevant existing material. Through systematically examining archived Web pages and carrying out a real-world case study, this work demonstrates that Associative Writing is an important and valid process, and furthermore that there is (albeit limited) evidence that some writers are adopting Associative Writing strategies in the Web. However, in investigating the issues facing these writers, five core challenges have been identified which may be barriers to a more widespread adoption of Associative Writing: (1) the lost in hyperspace problem, (2) legal issues over deep linking to copyrighted material, (3) the limitations of the Web hypertext model, (4) Web link integrity, and (5) that popular word-processor based Web writing tools do not adequately support each of the writing activities involved in Associative Writing. In response to these challenges, this thesis introduces the Associative Writing Framework, building on open hypertext, Semantic Web, hypertext writing, and hypertext annotation work to provide a novel interface for supporting browsing, reading, annotation, linking, and integrated writing. Although conceived in terms of supporting a generic Associative Writing scenario, the framework has been applied to the specific domain of intertextual dance analysis in order to carry out a focused evaluation. 
Initial indications are that the framework method is valid, and that continued work to promote and evaluate its more general applicability is worthwhile.
62

Using linked data in purposive social networks

Singh, Priyanka January 2016 (has links)
The Web has provided a platform for people to collaborate by using collective intelligence. Messaging boards and Q&A forums are some examples where people broadcast their issues and other people provide solutions. Such communities are defined as a Purposive Social Network (PSN) in this thesis. A PSN is a community where people with similar interests and varied expertise come together, use collective intelligence to solve common problems in the community, and build tools for a common purpose. Usually, Q&A forums are closed or semi-open. The data are controlled by the websites. Difficulty in the search and discovery of information is an issue. People searching for answers or experts in a website can only see results from its own network, while losing a whole community of experts in other websites. Another issue in Q&A forums is not getting any response from the community. There is a long tail of questions that get no answer. The thesis introduces the Suman system, which utilises Semantic Web (SW) and Linked Data technologies to solve the above challenges. SW technologies are used to structure the community data so it can be decentralized and used across platforms. Linked Data helps to find related information about linked resources. The Suman system uses available tools to solve the named entity disambiguation problem and add semantics to the PSN data. It uses a novel combination of semantic keyword search with traditional text search techniques to find similar questions with answers for unanswered questions, expands the query term with added semantics, and uses crowdsourced data to rank the results. Furthermore, the Suman system also recommends experts who can answer those questions. This helps to narrow down the long tail of unanswered questions in such communities. The Suman system is designed using the Design Science methodology and evaluated by users in two experiments.
The results were statistically analysed to show that the keywords generated by the Suman system were rated higher than the original keywords from the websites. It also showed that the participants agreed with the algorithm rating for answers provided by the Suman system. StackOverflow and Reddit are used as an example of PSN and to build an application as a proof of concept of the Suman system.
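The abstract's core retrieval idea — expanding a question's keywords with semantically related terms, then ranking candidate answers using crowdsourced signals — can be sketched roughly as follows. This is an illustrative reconstruction, not the thesis's actual algorithm; all function names and data are invented.

```python
# Sketch: semantic query expansion plus crowdsourced ranking.
# `related_terms` stands in for whatever Linked Data lookup the real
# system performs; candidate questions carry community vote counts.

def expand_query(keywords, related_terms):
    """Add semantically related terms to the original keyword set."""
    expanded = set(keywords)
    for kw in keywords:
        expanded.update(related_terms.get(kw, []))
    return expanded

def rank_candidates(query_keywords, candidates, related_terms):
    """Score candidate answered questions by keyword overlap with the
    expanded query, weighted by crowdsourced votes; best first."""
    expanded = expand_query(query_keywords, related_terms)
    scored = []
    for cand in candidates:
        overlap = len(expanded & set(cand["keywords"]))
        if overlap:
            scored.append((overlap * (1 + cand["votes"]), cand["title"]))
    return [title for score, title in sorted(scored, reverse=True)]
```

The weighting scheme here is deliberately simplistic; the point is only the two-stage shape of the method (expand, then rank with community data).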
63

Traditional amateur video producers' use of the Internet : making connections in a complex and contested environment

Hondros, John James January 2013 (has links)
The Internet has been adopted as a video distribution technology by different categories of amateur video producers who were using other distribution methods prior to its advent. I conducted a one-year ethnographic study of amateur producers from three such categories (public access television producers, video activists, and film and television fans) to understand their reasons for this adoption, how they used this technology, and the interactions with their audiences that followed from its use, analysing my findings within a new materialist framework. I found that the producers had a diverse set of reasons for going online and that these largely depended on their specific circumstances, and on how they saw the online environment in relation to their overall objectives as video makers. These circumstances and objectives also meant that some producers resisted going online at all, or used the technology in a restricted way, and that traditional distribution methods continued to exist in some form alongside the Internet-based ones. The producers assembled together different people and technologies to distribute their videos, which was often a complex and contested process, typically resulting in distribution assemblages that were precarious and that required on-going maintenance. These assemblages used a wide variety of technological components, selected for a broad range of reasons, which also largely reflected the specific circumstances and objectives of the producers. I also found that the producers varied considerably in their attitude towards audience engagement, as well as in the methods they used to achieve it, and in the success of those methods. Some were in fact indifferent to it, while others considered it a critical part of their activities. While some were successful in producing sustained interactions with their audiences, others failed to do so. These findings enrich and problematize our current understanding of this emergent phenomenon.
64

Reading in Web-based hypertexts : cognitive processes, strategies and reading goals

Protopsaltis, Aristidis January 2006 (has links)
Hypertext is a multi-linear, electronic, textual and interactive environment for presenting information. The objective of such an environment is that readers may browse through linked, cross-referenced, annotated texts in a multi-sequential manner, and thus, it is believed, improve learning. However, early and current research findings have revealed mixed results concerning the alleged advantage of hypertext over paper-based documents for learning. Researchers have identified the lack of research on the cognitive processes and the strategies that readers use during reading as one of the main factors behind such results. As a result, there is a need and scope for further research in modelling the cognitive processes involved in reading comprehension and the reading strategies in a hypertext environment. This research addresses some of the gaps in the field by proposing a model that represents the sequence of events that take place during reading in a Web-based hypertext environment. Emphasis is also placed on the strategies that readers use during hypertext reading and on the potential effect of different reading goals on reading comprehension. The evaluation of the model and the other hypotheses is conducted in two experiments using qualitative and quantitative methods. The first experiment employs the think-aloud method, with forty-two participating subjects. The results demonstrated that the proposed model precisely describes the sequence of events that take place during hypertext reading. They did not reveal any significant difference between different reading goals and understanding. They revealed four reading strategies: serial, serial overview, mixed, and mixed overview, and they identified three factors that influence the selection of hyperlinks: coherence, link location, and personal interest. The second experiment is an independent-samples design with ninety subjects. The results confirmed those found in the first experiment.
The current study contributes to the field of hypertext reading by proposing and evaluating a procedural model and by representing this model graphically. In doing so it addresses some of the voids in the field, expands our understanding of reading processes and reading strategies, and provides practical guidelines for designing hypertext that supports effective learning.
65

Retrieval of multimedia information : simulation of a proposed system

Wall, Raymond Alwyn January 1972 (has links)
A design for an automated information retrieval system has been evolved, and a manual simulation of it tested. The main experimental aspects are: (a) adding thesaurus terms into a hierarchical classification, to accomplish more versatile search capabilities than are possible with existing 'thesaurus-only' or 'classification-only' systems; (b) specifying a computer system which could produce its own manual auxiliary retrieval package to perform a high proportion of searches without recourse to the main computer system.
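The design's first experimental aspect — attaching thesaurus terms to nodes of a hierarchical classification so a single term can trigger a subtree-wide search — can be illustrated with a small sketch. This is a modern illustration of the general technique, not the 1972 system itself; all class and label names are invented.

```python
# Sketch: a classification hierarchy whose nodes carry thesaurus terms.
# Matching a term at a node broadens the search to that node's whole
# subtree, combining 'thesaurus-only' and 'classification-only' search.

class ClassNode:
    def __init__(self, label, terms=(), children=()):
        self.label = label
        self.terms = set(terms)      # thesaurus terms attached to this node
        self.children = list(children)

    def search(self, term):
        """Return the labels of this node's subtree if the term matches
        here; otherwise recurse into the children."""
        if term in self.terms:
            return self.subtree_labels()
        results = []
        for child in self.children:
            results.extend(child.search(term))
        return results

    def subtree_labels(self):
        labels = [self.label]
        for child in self.children:
            labels.extend(child.subtree_labels())
        return labels
```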
66

A textual transmission model of readership and hypertext

Rowberry, Simon January 2014 (has links)
Since the turn of the millennium, hypertext (most popularly known as links on the World Wide Web) has become a banal part of our everyday life and has been largely neglected in scholarly discourse. As digital textual media becomes more versatile and re-usable in a variety of contexts, hypertext has once more become an important facet in digital design, but this time as part of the reception of text rather than a foundational part of the text’s composition. The current project proposes a framework for understanding the recent transformation of hypertext through the Literary Web hourglass model, which posits that hypertext does not exist as a textual artefact, but rather as a trace of the processes of composition and reception. The Literary Web offers a toolkit for the analysis of literary texts from both a book-historical and a close-reading perspective. This is demonstrated through a reading of Vladimir Nabokov’s Pale Fire, a foundational work of hypertext fiction. Through reference to some playful examples of contemporary digital literature, termed the hypertext circus, the current project concludes by suggesting ways in which receptional forms of hypertext can be used to create a more open and creative form of hypertext.
67

Human motion description in multimedia database

Cheng, Fangxiang January 2004 (has links)
Information retrieval from multimedia databases has become an urgent problem. Its solution can be facilitated by describing the content of multimedia databases in a variety of ways. In a video database, the options can be captions, speech, audio, image features, etc. Presently, the MPEG-7 framework deals with standardisation of multimedia content description techniques. Image features, such as motion, colour, texture and shape, are used for image annotation. The research described here is concerned with the annotation of sports video as part of an EU project, ASSAVID. The framework of ASSAVID is similar to MPEG-7. The focus of the research is to develop motion feature descriptors. Motion description is increasingly attractive because motion features encapsulate temporal information. However, problems plaguing low-level motion processing impede research on high-level motion analysis. This becomes more severe in applications with real-life video. In our research, human motion is adopted for sports annotation because sports involve a number of human behaviours. Human motion analysis has a wide spectrum of applications, such as surveillance, medical imaging and information retrieval. Yet there are no techniques directly related to this topic in MPEG-7. One of the useful descriptors of complex human motion is motion periodicity. However, among the existing techniques, only a few successful attempts at periodic motion description in real-life video have been reported. In this thesis, we present a novel method for sports video retrieval using periodic motion features. We focus on modelling human motion, and this is accomplished by solving several sub-problems: a novel non-rigid foreground moving object detection algorithm is developed for complex real-life video. The algorithm is used to process low-level motion and segment out the human body from images with the least computational expense.
Innovative sport templates are constructed for human behaviour description using periodic motion features. They represent the sport types in ASSAVID. Motion feature vectors are built using the templates. Motion feature classification is accomplished using a neural network. The proposed method has been tested on the ASSAVID database, which contains more than 800 minutes of real-life video from the BBC 1992 Barcelona Olympic Games. In total, about 810,000 images have been processed to test motion features. Four different types of sport are tested. The experimental results show the proposed method to be successful.
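A common way to extract the kind of periodic motion feature the abstract describes is to autocorrelate a one-dimensional motion signal and take the lag of the strongest peak as the period. The sketch below illustrates that general technique only; it is not the thesis's actual algorithm, and the signal is invented.

```python
# Sketch: estimating the period of a motion signal via autocorrelation.
# In practice the signal might be, e.g., a limb's position over frames.

def autocorrelation(signal, lag):
    """Normalised autocorrelation of the signal at the given lag."""
    n = len(signal) - lag
    mean = sum(signal) / len(signal)
    num = sum((signal[i] - mean) * (signal[i + lag] - mean) for i in range(n))
    den = sum((s - mean) ** 2 for s in signal)
    return num / den if den else 0.0

def estimate_period(signal, max_lag):
    """Pick the lag (>= 1) with the strongest autocorrelation peak."""
    best_lag, best_val = 1, float("-inf")
    for lag in range(1, max_lag + 1):
        val = autocorrelation(signal, lag)
        if val > best_val:
            best_lag, best_val = lag, val
    return best_lag
```

Real-life video would of course require the low-level segmentation step first; this sketch starts from an already-extracted signal.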
68

Integrating speech and visual text in multimodal interfaces

Shmueli, Yael January 2005 (has links)
This work systematically investigates when and how combining speech output and visual text may facilitate processing and comprehension of sentences. It is proposed that a redundant multimodal presentation of speech and text has the potential for improving sentence processing but also for severely disrupting it. The effectiveness of the presentation is assumed to depend on the linguistic complexity of the sentence, the memory demands incurred by the selected multimodal configuration and the characteristics of the user. The thesis employs both theoretical and empirical methods to examine this claim. On the theoretical front, the research makes explicit the features of multimodal sentence presentation and of the structures and processes involved in multimodal language processing. Two entities are presented: a multimodal design space (MMDS) and a multimodal user model (MMUM). The dimensions of the MMDS include aspects of (i) the sentence (linguistic complexity, cf. Gibson, 1991), (ii) the presentation (configurations of media), and (iii) user cost (a function of the first two dimensions). The second entity, the MMUM, is a cognitive model of the user. The MMUM attempts to characterise the cognitive structures and processes underlying multimodal language processing, including the supervisory attentional mechanisms that coordinate the processing of language in parallel modalities. The model includes an account of individual differences in verbal working memory (WM) capacity (cf. Just and Carpenter, 1992) and can predict the variation in the cognitive cost experienced by the user when presented with different contents in a variety of multimodal configurations. The work attempts to validate the central propositions of the MMUM through three controlled user studies. The experimental findings indicate the validity of some features of the MMUM but also the need for further refinement.
Overall, they suggest that a durable text may reduce the processing cost of demanding sentences delivered by speech, whereas adding speech to such sentences when presented visually increases processing cost. Speech can be added to various visual forms of text only if the linguistic complexity of the sentence imposes a low to moderate load on the user. These conclusions are translated into a set of guidelines for effective multimodal presentation of sentences. A final study then examines the validity of some of these guidelines in an applied setting. Results highlight the need for enhanced experimental control. However, they also demonstrate that the approach used in this research can validate specific assumptions regarding the relationship between cognitive cost, sentence complexity and multimodal configuration aspects, and thereby inform the design process of effective multimodal user interfaces.
69

The relationship between media quality and user cost in networked multimedia applications

Wilson, Gillian May January 2006 (has links)
The research reported in this thesis assesses the impact of media quality degradations in Internet multimedia conferencing on users. Low-quality audio and video can be experienced; it is therefore important to determine the minimum levels of quality needed to perform specific tasks. This has most commonly been investigated using subjective measures; however, the research reported in this thesis adopted a 3-factor evaluation framework of task performance, user satisfaction and user cost. User satisfaction was measured subjectively, whereas physiological indicators of perceptual strain were utilised to measure user cost. Physiological measures provide continuous data throughout a session, are not subject to cognitive mediation, and taking such measurements does not interfere with the user's task. Five experiments were performed investigating audio and video quality degradations. With the exception of one passive listening task, all tasks used were based on remote interviews, as they fully exploit the capabilities of the application. Results showed that physiological responses to media quality degradations can be detected in passive, perceptual tasks. However, active participation in a task made it more difficult to detect changes due to quality degradations. In all experiments physiological measures gave information on the nature of the tasks being performed and the effects of variables such as order. The results of this research were then used in three further experiments in the areas of VR and web quality of service and design. In conclusion, the physiological measures utilised in the research reported in this thesis can be employed to assess the impact of media quality degradations in passive perceptual tasks and to give general information about the nature of the task being performed.
70

Supporting webpage revisiting with history data and visualization

Van, Trien Do January 2013 (has links)
This research addresses the general topic of “keeping found things found” by investigating difficulties people encounter when revisiting webpages. The overall aim of the research is to design, develop and evaluate a web history tool that addresses these difficulties. An empirical study has been conducted. Participants recorded their web navigation for three months using a Firefox add-on. Each participant then took part in a controlled laboratory experiment, to revisit webpages they had visited neither frequently (on only one day) nor recently (1 week or 1 month ago). Ten underlying causes of failure were discovered. Overall, 61% of the failures occurred when the target page: 1) had originally been accessed via search results; 2) was on a topic a participant often looked at; or 3) was on a known but large website. Based on the findings of the empirical study, a new visualization history tool which supports people in revisiting webpages has been designed and developed as an add-on for Firefox. The tool has two main novel aspects. Firstly, by providing different navigation techniques, it enables users to revisit webpages within their long-term web history. Secondly, the visualization presentation is created based on the user’s navigational paths (even crossing different tabs) rather than the chronological order in which the webpages were visited. Evidence of the benefits of the visualization history tool has been provided through a three-month field study. The results showed that the history tool solved the identified causes of failure and helped participants succeed on 96% of revisiting occasions. They particularly used the tool to revisit webpages which had been visited neither frequently nor recently. Participants often took only 3 steps to revisit a webpage. Overall, they were satisfied with the tool and rated it 4.1/5.0, and 84% of them wanted to keep using the tool after the evaluation.
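The tool's second novel aspect — organising history by navigational paths rather than visit chronology — can be sketched as building a tree keyed by referrer. This is a hypothetical illustration of the idea, not the add-on's implementation; the URLs and function names are invented.

```python
# Sketch: grouping a browsing history by which page led to which,
# instead of by visit time.

def build_path_tree(visits):
    """visits: list of (url, referrer_url_or_None) in visit order.
    Returns a dict mapping each url to the urls opened from it."""
    children = {}
    for url, referrer in visits:
        children.setdefault(url, [])
        if referrer is not None:
            children.setdefault(referrer, []).append(url)
    return children

def render(children, root, depth=0):
    """Produce an indented outline of the navigation paths from `root`."""
    lines = ["  " * depth + root]
    for child in children.get(root, []):
        lines.extend(render(children, child, depth + 1))
    return lines
```

Rendering from each session's entry page yields an outline of where the user actually went from where, which is the structure such a visualization displays instead of a flat chronological list.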
