About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
31

A full-scale semantic content-based model for interactive multimedia information systems

Agius, Harry Wayne January 1997 (has links)
Issues of syntax have dominated research in multimedia information systems (MMISs), with video developing as a technology of images and audio as one of signals. But when we use video and audio, we do so for their content. This is a semantic issue. Current research in multimedia on semantic content-based models has adopted a structure-oriented approach, where video and audio content is described on a frame-by-frame or segment-by-segment basis (where a segment is an arbitrary set of contiguous frames). This approach has failed to cater for semantic aspects, and thus has not been fully effective when used within an MMIS. The research undertaken for this thesis reveals seven semantic aspects of video and audio: (1) explicit media structure; (2) objects; (3) spatial relationships between objects; (4) events and actions involving objects; (5) temporal relationships between events and actions; (6) integration of syntactic and semantic information; and (7) direct user-media interaction. This thesis develops a full-scale semantic content-based model that caters for the above seven semantic aspects of video and audio. To achieve this, it uses an entities of interest approach, instead of a structure-oriented one, where the MMIS integrates relevant semantic content-based information about video and audio with information about the entities of interest to the system, e.g. mountains, vehicles, employees. A method for developing an interactive MMIS that encompasses the model is also described. Both the method and the model are used in the development of ARISTOTLE, an interactive instructional MMIS for teaching young children about zoology, in order to demonstrate their operation.
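A minimal sketch of how an entities-of-interest model might link semantic information to media segments, in the spirit of this abstract. The class names and fields are illustrative assumptions, not the thesis's actual model; the lion/zebra example echoes the zoology domain of ARISTOTLE.

```python
from dataclasses import dataclass, field

@dataclass
class Segment:
    """A contiguous run of video/audio frames."""
    start_frame: int
    end_frame: int

@dataclass
class Entity:
    """An entity of interest (e.g. a mountain, vehicle, or employee)."""
    name: str
    appearances: list = field(default_factory=list)  # Segments it appears in

@dataclass
class Event:
    """An action involving entities over some media segment."""
    label: str
    participants: list
    segment: Segment

# A lion chases a zebra: entities are linked to the media depicting them,
# rather than describing the media frame-by-frame.
lion, zebra = Entity("lion"), Entity("zebra")
chase = Event("chases", [lion, zebra], Segment(120, 480))
for animal in chase.participants:
    animal.appearances.append(chase.segment)
print(lion)
```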
32

Investigation of the relationship between aesthetics and perceived usability in web pages

Kominis, Raphael January 2014 (has links)
The main hypothesis of the thesis is that, between two systems identical in functionality and usability, differences in aesthetics may positively influence users' perceived usability. To date, a narrow focus on the engineering aspects of aesthetics has adversely affected the scope and success of experiments, and previous work in the field therefore needed to be revisited. The thesis reviews literature and theory in usability and aesthetics, the latter from the point of view of philosophy, theory and application. It also explores the relationship between aesthetics, usability and user engagement; discusses a distinct new trend in research that identifies a link between beauty and perceived usability of website interaction; and develops a pilot for an experimental methodology. Based on conclusions from the review of the field of usability, two experiments were designed and carried out: one using an independent-measures design and one using a repeated-measures design. The findings of these experiments confirmed the hypothesis that perceived usability was positively influenced by higher aesthetic quality.
33

An association rule dynamics and classification approach to event detection and tracking in Twitter

Adedoyin-Olowe, Mariam January 2015 (has links)
Twitter is a microblogging application used for sending and retrieving instant online messages of not more than 140 characters. There has been a surge in Twitter activity since its launch in 2006, as well as a steady increase in event detection research on Twitter data (tweets) in recent years. With 284 million monthly active users, Twitter has continued to grow both in size and activity. The network is rapidly changing the way a global audience sources information and is influencing the process of journalism [Newman, 2009]. Twitter is now perceived as an information network in addition to being a social network, which explains why traditional news media follow activities on Twitter to enhance their news reports and updates. Knowing the significance of the network as an information dissemination platform, news media maintain Twitter accounts where they post their news headlines together with a link to the full story on their online news sites. Twitter users, in some cases, post breaking news on the network before such news is published by traditional news media, which can be ascribed to those users' nearness to the location of events. The use of Twitter as a network for information dissemination, as well as for opinion expression by different entities, is now common. This brings with it the computational challenge of extracting newsworthy content from Twitter's noisy data. Given the enormous volume of data Twitter generates, users append the hashtag (#) symbol as a prefix to keywords in tweets. Hashtag labels describe the content of tweets and make it easy to search for and read tweets of interest. The volume of Twitter streaming data makes it imperative to derive Topic Detection and Tracking methods to extract newsworthy topics from tweets. Since hashtags describe and enhance the readability of tweets, this research shows how the appropriate use of hashtag keywords in tweets can capture the temporal evolution of related real-life topics and consequently enhance Topic Detection and Tracking on the Twitter network. We chose to apply our method to the Twitter network because of its restricted number of characters per message and because it allows data to be shared publicly; more importantly, hashtags are an inherent component of Twitter. To this end, the aim of this research is to develop, implement and validate a new approach that extracts newsworthy topics from the hashtags of tweets on real-life topics over a specified period using Association Rule Mining. We term our novel methodology Transaction-based Rule Change Mining (TRCM). TRCM is a system built on top of the Apriori method of Association Rule Mining that extracts patterns of change in Association Rules over tweets' hashtag keywords at different periods of time and maps the extracted keywords to related real-life topics or scenarios. To the best of our knowledge, the dynamics of Association Rules over hashtag co-occurrences have not previously been explored as a Topic Detection and Tracking method on Twitter. Applying Apriori to the hashtags present in tweets at two consecutive periods t and t+1 produces two association rulesets; the differences between them represent rule evolution in the context of this research. A change in rules is discovered by matching every rule in the ruleset at time t with those in the ruleset at time t+1.
The changes are grouped under four identified rule types, namely 'New' rules, 'Unexpected Consequent' and 'Unexpected Conditional' rules, 'Emerging' rules and 'Dead' rules. The four types represent different levels of real-life topic evolution. For example, an emerging rule represents a very important occurrence such as breaking news, while an unexpected rule represents an unexpected twist in an ongoing topic. A new rule represents dissimilarity between the rulesets at times t and t+1. Finally, a dead rule represents a topic that is no longer present on the Twitter network. TRCM reveals the dynamics of the Association Rules present in tweets and demonstrates the linkage between the different types of rule dynamics and targeted real-life topics/events. In this research, we conducted experimental studies on tweets from different domains, such as sports and politics, to test the effectiveness of our method, and we validated TRCM against carefully chosen ground truth. The outcomes of our research experiments include: identification of four rule dynamics in tweets' hashtags, namely New, Emerging, Unexpected and Dead rules, using Association Rule Mining, signifying how news and events evolve in real-life scenarios; identification of rule evolution on the Twitter network using Rule Trend Analysis and Rule Trace; detection and tracking of topic evolution on Twitter using Transaction-based Rule Change Mining (TRCM); and identification of how the peculiar features of each TRCM rule type affect its effectiveness on real datasets.
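A minimal sketch of the ruleset-comparison step, assuming hypothetical hashtag transactions. The support/confidence thresholds and the criteria used to label 'new', 'emerging' and 'dead' rules are simplified illustrations of the idea, not the thesis's exact TRCM definitions (the unexpected-rule cases are omitted).

```python
from itertools import combinations

def mine_rules(transactions, min_support=0.2, min_confidence=0.5):
    """Apriori-style mining of pairwise hashtag rules (A -> B)."""
    n = len(transactions)
    counts, pair_counts = {}, {}
    for tags in transactions:
        for t in set(tags):
            counts[t] = counts.get(t, 0) + 1
        for a, b in combinations(sorted(set(tags)), 2):
            pair_counts[(a, b)] = pair_counts.get((a, b), 0) + 1
    rules = {}
    for (a, b), c in pair_counts.items():
        if c / n < min_support:
            continue
        for ante, cons in ((a, b), (b, a)):
            conf = c / counts[ante]
            if conf >= min_confidence:
                rules[(ante, cons)] = conf
    return rules

def classify_changes(rules_t, rules_t1):
    """Label rule dynamics between periods t and t+1 (TRCM-style)."""
    new = set(rules_t1) - set(rules_t)
    dead = set(rules_t) - set(rules_t1)
    emerging = {r for r in rules_t if r in rules_t1
                and rules_t1[r] > rules_t[r]}   # confidence grew over time
    return {"new": new, "emerging": emerging, "dead": dead}

# Illustrative tweet hashtag transactions at two consecutive periods.
period_t  = [["#election", "#debate"], ["#election", "#debate"],
             ["#election", "#poll"], ["#weather"]]
period_t1 = [["#election", "#results"], ["#election", "#results"],
             ["#election", "#debate"], ["#election", "#results"]]

print(classify_changes(mine_rules(period_t), mine_rules(period_t1)))
```

Here the rules involving '#results' appear only in the second ruleset and are labelled new, matching the intuition of a newly detected topic such as breaking news.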
34

A formal model for personalisable, adaptive hyperlink-based systems

Ohene-Djan, James Francis January 2000 (has links)
The attraction of hyperlink-based interaction as a model for information retrieval has long been recognised and has increased in popularity with the mainstream emergence of large-scale hypermedia systems such as the World-Wide Web (WWW). For hypermedia systems to realise their full potential, researchers have postulated that such systems should exhibit sophisticated, knowledge-based personalisation and adaptation (P&A) features, without which users’ information retrieval goals are less likely to be achieved. As a result of these postulations, personalisable, adaptive hyperlink-based systems (PA-HLBSs) have arisen as a new topic of hypermedia research. This dissertation contributes a novel abstract approach to the formal characterisation of the interaction process which takes place between the user of a hyperlink-based system (HLBS) and the system itself. This research addresses the issue of how hyperlink-based systems can be endowed with features which enable the personalisation and adaptation of the interaction process. This research also addresses the specific issue of how to characterise precisely the emergent properties of HLBSs and thereby make possible a systematic, principled and exhaustive elicitation of the space of possible P&A actions. The approach is unique in formally modelling a rich set of abstract user-initiated P&A actions which enable individual users to come closer to satisfying their specific, and often dynamic, information retrieval goals. Furthermore, the model indicates how system-initiated P&A actions fit cohesively and non-disruptively with user-initiated ones. The model proposed is descriptive, rather than prescriptive, and is cast at a level of abstraction above that of concrete systems exploring current technologies. The model aims to be the foundation for a systematic investigation of the nature, scope and effects of user and system-initiated tailoring actions on HLBSs for information retrieval. Such an approach, it is hoped, will allow for user and system-initiated P&A actions to be studied with greater conceptual clarity than is possible with technology-driven experimentation. The dissertation also contains a brief overview of PAS, a personalisable HLBS which instantiates the major aspects of the proposed model, thereby substantiating the claim that the abstract approach taken allows not only for a greater understanding of what personalisation and adaptivity means in the context of HLBSs, but also how the model may aid the design of such systems.
35

An investigation into dynamic web service composition using a simulation framework

Yousif-Mohammad, Khalid Mirghnee January 2013 (has links)
[Motivation] Web Services technology has emerged as a promising solution for creating distributed systems, with the potential to overcome the limitations of former distributed system technologies. Web services provide a platform-independent framework that enables companies to run their business services over the internet. Therefore, many techniques and tools are being developed to create business-to-business/business-to-customer applications. In particular, researchers are exploring ways to build new services from existing services by dynamically composing services from a range of resources. [Aim] This thesis aims to identify the technologies and strategies currently being explored for organising the dynamic composition of Web services, and to determine how extensively each of these has been demonstrated and assessed. In addition, the thesis studies the matchmaking and selection processes, which are essential to Web service composition. [Research Method] We undertook a mapping study of empirical papers published over the period 2000 to 2009. The aim of the mapping study was to identify the technologies and strategies currently being explored for organising the composition of Web services, and to determine how extensively each of these has been demonstrated and assessed. We then built a simulation framework to carry out experiments on composition strategies. The first experiment compared the results of a close replication of an existing study with the original results in order to evaluate our replication. The simulation framework was then used to investigate the use of a QoS model for supporting the selection process, comparing it with a ranking technique in terms of performance. [Results] The mapping study found 1172 papers that matched our search terms, of which 94 were classified as providing practical demonstration of ideas related to dynamic composition. We analysed 68 of these in more detail; only 29 provided a 'formal' empirical evaluation, and from these we selected a 'baseline' study to test our simulation model. Running the experiments using simulated datasets showed that, in the first experiment, the results of the close replication study and the original study were similar in terms of their profile. In the second experiment, the results demonstrated that the QoS model was better than the ranking mechanism at selecting the composite plan with the highest quality score. [Conclusions] No one approach to service composition seemed to meet all needs, but a number have been investigated more extensively than others. The similarity between the results of the close replication and the original study demonstrated the validity of our simulation framework and showed that the results of the original study can be replicated. Using the simulation, it was demonstrated that the QoS model performed better than the ranking mechanism in terms of the overall quality of the selected plan. The overall objective of this research was to develop a generic life-cycle model for Web service composition from a mapping study of the literature; this was then used to run simulations to replicate studies on matchmaking and to compare selection methods.
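A minimal sketch of how a weighted QoS score might drive composite-plan selection, of the kind this experiment compares against plain ranking. The attribute names, weights and candidate plans are assumptions for illustration; the thesis's actual QoS model is not reproduced here.

```python
# Hypothetical QoS attributes per candidate composite plan. Higher is
# better for reliability; lower is better for cost and response time.
plans = {
    "plan_a": {"response_time": 120, "cost": 5.0, "reliability": 0.98},
    "plan_b": {"response_time": 200, "cost": 2.0, "reliability": 0.95},
    "plan_c": {"response_time": 150, "cost": 3.5, "reliability": 0.99},
}
weights = {"response_time": 0.4, "cost": 0.3, "reliability": 0.3}

def normalise(attr, value, lower_is_better):
    """Min-max normalise an attribute to [0, 1], 1 being best."""
    vals = [p[attr] for p in plans.values()]
    lo, hi = min(vals), max(vals)
    if hi == lo:
        return 1.0
    score = (value - lo) / (hi - lo)
    return 1.0 - score if lower_is_better else score

def qos_score(plan):
    """Weighted sum of normalised QoS attributes (simple additive weighting)."""
    return (weights["response_time"] * normalise("response_time", plan["response_time"], True)
            + weights["cost"] * normalise("cost", plan["cost"], True)
            + weights["reliability"] * normalise("reliability", plan["reliability"], False))

best = max(plans, key=lambda name: qos_score(plans[name]))
print(best, {name: round(qos_score(p), 3) for name, p in plans.items()})
```

Unlike ranking on a single attribute, the weighted score trades attributes off against each other, which is why a balanced plan (plan_c here) can win despite not being best on any one metric.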
36

Semantic multimedia modelling & interpretation for search & retrieval

Aslam, Nida January 2011 (has links)
The rapid spread of multimedia-capable devices has culminated in a proliferation of image and video data. Owing to this omnipresence, these data have become part of our daily life, and the rate of data production now outstrips our capacity to organise and use it; perhaps one of the most pressing problems of this digital era is information overload. Until now, progress in image and video retrieval research has achieved only limited success, owing to its interpretation of images and videos in terms of primitive features, whereas humans generally access multimedia assets in terms of semantic concepts. The retrieval of digital images and videos is impeded by the semantic gap: the discrepancy between a user's high-level interpretation of an image and the information that can be extracted from the image's physical properties. Content-based image and video retrieval systems are particularly vulnerable to the semantic gap because of their dependence on low-level visual features for describing image and video content. The semantic gap can be narrowed by including high-level features, since high-level descriptions of images and videos are more capable of capturing the semantic meaning of their content. It is generally understood that the problem of image and video retrieval is still far from being solved. This thesis proposes an approach for intelligent semantic extraction from multimedia for search and retrieval, intended to bridge the gap between visual features and semantics. It proposes a Semantic Query Interpreter (SQI) for images and videos, which selects the pertinent terms from the user query and analyses them lexically and semantically, reducing the semantic as well as the vocabulary gap between users and the machine. The thesis also explores a novel ranking strategy for image search and retrieval: SemRank, a system that incorporates Semantic Intensity (SI) in evaluating the semantic relevance between the user query and the available data. Semantic Intensity captures the concept-dominance factor of an image: an image is a combination of various concepts, and among them some are more dominant than others. SemRank ranks the retrieved images on the basis of Semantic Intensity. The investigations were made on the LabelMe image and LabelMe video datasets. Experiments show that the proposed approach is successful in bridging the semantic gap, and reveal that the proposed system outperforms traditional image retrieval systems.
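A minimal sketch of SI-weighted ranking, assuming hypothetical per-image concept-dominance annotations. The scoring (summing the SI of matched query concepts) is an illustrative reading of the abstract, not the thesis's actual SemRank formula.

```python
# Each image is annotated with concepts and a hypothetical Semantic
# Intensity (SI) value: how dominant that concept is in the image.
images = {
    "img1": {"mountain": 0.5, "sky": 0.25, "tree": 0.25},
    "img2": {"mountain": 0.25, "lake": 0.5, "tree": 0.25},
    "img3": {"car": 0.75, "road": 0.25},
}

def semrank(query_concepts, corpus):
    """Rank images by the summed SI of the query concepts they contain."""
    scores = {}
    for name, concepts in corpus.items():
        score = sum(concepts.get(c, 0.0) for c in query_concepts)
        if score > 0:
            scores[name] = score
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

print(semrank({"mountain", "tree"}, images))
# -> [('img1', 0.75), ('img2', 0.5)]: both images contain both concepts,
#    but img1 ranks higher because 'mountain' dominates it.
```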
37

Semantic multimedia modelling & interpretation for annotation

Ullah, Irfan January 2011 (has links)
The emergence of multimedia-enabled devices, particularly the incorporation of cameras in mobile phones, together with rapid advances in low-cost storage, has boosted the rate of multimedia data production drastically. Witnessing such a ubiquity of digital images and videos, the research community has been addressing the issue of their meaningful utilisation and management. Stored in monumental multimedia corpora, digital data need to be retrieved and organised in an intelligent way, drawing on the rich semantics involved. The utilisation of these image and video collections demands proficient annotation and retrieval techniques. Recently, the multimedia research community has progressively shifted its emphasis to the personalisation of these media. The main impediment in image and video analysis is the semantic gap: the discrepancy between a user's high-level interpretation of an image or video and its low-level computational interpretation. Content-based image and video annotation systems are remarkably susceptible to the semantic gap because of their reliance on low-level visual features for delineating semantically rich content. However, visual similarity is not semantic similarity, so there is a demand to break through this dilemma in an alternative way. The semantic gap can be narrowed by incorporating high-level and user-generated information into the annotation. High-level descriptions of images and videos are more capable of capturing the semantic meaning of multimedia content, but it is not always feasible to collect this information. It is commonly agreed that the problem of high-level semantic annotation of multimedia is still far from being answered. This dissertation puts forward approaches for intelligent multimedia semantic extraction for high-level annotation, intended to bridge the gap between visual features and semantics. It proposes a framework for annotation enhancement and refinement for object/concept-annotated image and video datasets. The overall theme is to first purify the datasets of noisy keywords and then expand the concepts lexically and common-sensically to fill the vocabulary and lexical gaps and achieve high-level semantics for the corpus. The dissertation also explores a novel approach for propagating high-level semantics (HLS) through image corpora. HLS propagation takes advantage of Semantic Intensity (SI), the concept-dominance factor of an image, and of annotation-based semantic similarity between images: since an image is a combination of various concepts, some of which are more dominant than others, the semantic similarity of two images is based on the SI and the semantic similarity of the concepts in each pair of images. Moreover, HLS propagation exploits clustering techniques to group similar images, so that a single effort by a human expert to assign high-level semantics to a randomly selected image propagates to other images through the clusters. The investigation has been made on the LabelMe image and LabelMe video datasets. Experiments exhibit that the proposed approaches deliver a noticeable improvement towards bridging the semantic gap and reveal that the proposed system outperforms traditional systems.
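A minimal sketch of label propagation through clustering, the "one expert label per cluster" idea the abstract describes. The feature vectors, cluster count and labels are hypothetical, and KMeans is used here only as a stand-in for whatever clustering the thesis employs.

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical per-image feature vectors (e.g. concept/SI histograms).
image_ids = ["img1", "img2", "img3", "img4"]
features = np.array([
    [0.9, 0.1, 0.0], [0.8, 0.2, 0.0],   # one visually similar pair
    [0.1, 0.9, 0.1], [0.0, 0.8, 0.2],   # another similar pair
])

# Group similar images into clusters.
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(features)

# A human expert assigns a high-level semantic to one image per cluster;
# these labels are illustrative stand-ins.
expert = {"img1": "mountain scenery", "img3": "city street"}

# Propagate each expert label to the other members of its cluster.
cluster_of = dict(zip(image_ids, kmeans.labels_))
label_of_cluster = {cluster_of[i]: lbl for i, lbl in expert.items()}
annotations = {i: label_of_cluster[cluster_of[i]] for i in image_ids}
print(annotations)   # img2 inherits img1's label, img4 inherits img3's
```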
38

Abstraction, refinement and concurrent reasoning

Raad, Azalea January 2016 (has links)
This thesis explores the challenges in abstract library specification, library refinement and reasoning about fine-grained concurrent programs. For abstract library specification, this thesis applies structural separation logic (SSL) to formally specify the behaviour of several libraries in an abstract, local and compositional manner. This thesis further generalises the theory of SSL to allow for library specifications that are language-independent. Most notably, we specify a fragment of the Document Object Model (DOM) library. This result is compelling as it significantly improves upon existing DOM formalisms in that the specifications produced are local, compositional and language-independent. Concerning library refinement, this thesis explores two existing approaches to library refinement for separation logic, identifying their advantages and limitations in different settings. This thesis then introduces a hybrid approach to refinement, combining the strengths of both techniques for simple scalable library refinement. These ideas are then adapted to refinement for SSL by presenting a JavaScript implementation of the DOM fragment studied and establishing its correctness with respect to its specification using the hybrid refinement approach. As to concurrent reasoning, this thesis introduces concurrent local subjective logic (CoLoSL) for compositional reasoning about fine-grained concurrent programs. CoLoSL introduces subjective views, where each thread is verified with respect to a customised local view of the state, as well as the general composition and framing of interference relations, allowing for better proof reuse.
39

Investigation of the role of service level agreements in Web service quality

Soomro, Aijaz Ahmed January 2016 (has links)
Context/Background: Use of Service Level Agreements (SLAs) is crucial for providing value-added services that successfully meet consumers' requirements. SLAs also assure consumers of the expected Quality of Service. Aim: This study investigates how efficient structural representation and management of SLAs can help to ensure the Quality of Service (QoS) in Web services during Web service composition. Method: Existing specifications and structures for SLAs for Web services do not fully formalize, or provide support for, the automatic and dynamic behavioral aspects needed for QoS calculation. This study addresses how to formalize and document the structure of SLAs for better service utilization and improved QoS results. The Service Oriented Architecture (SOA) is extended in this study with the addition of an SLAAgent, which helps to automate QoS calculation using Fuzzy Inference Systems, as well as service discovery, service selection, and SLA monitoring and management during service composition, with the help of structured SLA documents. Results: The proposed framework improves how SLAs are structured, managed and monitored during Web service composition, to achieve better Quality of Service effectively and efficiently. Conclusions: Automating SLAs to deal with different types of computational requirements is a challenge during Web service composition. This study shows the significance of SLAs for better QoS during the composition of services in SOA.
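A minimal sketch of how a Fuzzy Inference System might map raw SLA metrics to a crisp QoS score, the kind of calculation the SLAAgent automates. The membership functions, rule base and defuzzification below are invented for illustration and are not the SLAAgent's actual design.

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_qos(response_ms, availability):
    """Map two SLA metrics to a crisp QoS score in [0, 1]."""
    # Fuzzify inputs (assumed ranges: 0-400 ms, 0.9-1.0 availability).
    rt_good = tri(response_ms, -1, 0, 200)
    rt_poor = tri(response_ms, 100, 400, 401)
    av_good = tri(availability, 0.95, 1.0, 1.01)
    av_poor = tri(availability, 0.89, 0.9, 0.98)

    # Rule base: AND = min, OR = max.
    high = min(rt_good, av_good)          # fast AND highly available
    low = max(rt_poor, av_poor)           # either metric poor
    medium = max(0.0, 1.0 - high - low)   # simplification: remaining activation

    # Defuzzify as a weighted average of output levels (0.2, 0.6, 1.0).
    total = high + medium + low
    return (1.0 * high + 0.6 * medium + 0.2 * low) / total if total else 0.0

print(round(fuzzy_qos(120, 0.99), 3))   # fast and available -> high score
print(round(fuzzy_qos(350, 0.91), 3))   # slow and flaky -> low score
```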
40

Enhancing distributed real-time collaboration with automatic semantic annotation

Juby, Benjamin Paul January 2005 (has links)
Distributed real-time collaboration, such as group-to-group videoconferencing, is becoming increasingly popular. However, this form of collaboration tends to be less effective than co-located interactions and there is a significant body of research that has sought to improve the collaboration technology through a variety of methods. Some of this research has focused on adding annotations that explicitly represent events that take place during the course of a collaboration session. While this approach shows promise, existing work has in general lacked high-level semantics, which limits the scope for automated processing of these annotations. Furthermore, the systems tend not to work in real-time and therefore only provide benefit during the replay of recorded sessions. The systems also often require significant effort from the session participants to create the annotations. This thesis presents a general-purpose framework and proof of concept implementation for the automated, real-time annotation of live collaboration sessions. It uses technologies from the Semantic Web to introduce machine-processable semantics. This enables inference to be used to automatically generate annotations by inferring high-level events from basic events captured during collaboration sessions. Furthermore, the semantic approach allows the framework to support a high level of interoperability, reuse and extensibility. The real-time nature of the framework means that the annotations can be displayed to meeting participants during a live session, which means that they can directly be of benefit during the session as well as being archived for later indexing and replay of a session recording. The semantic annotations are authored in RDF (Resource Description Framework) and are compliant to an OWL (Web Ontology Language) ontology. Both these languages are World Wide Web Consortium (W3C) recommendations. The framework uses rule-based inference combined with knowledge from an external triplestore to generate the annotations. A shared buffer called a tuple space is used for sharing these annotations between distributed sites. The proof of concept implementation uses existing Access Grid videoconferencing technology as an example application domain, to which speaker identification and participant tracking are added as examples of semantic annotations.
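A minimal sketch of authoring one such RDF annotation with the rdflib library. The ontology namespace, event class and properties are hypothetical placeholders; the thesis's actual OWL ontology and triplestore integration are not reproduced here.

```python
from rdflib import Graph, Literal, Namespace, RDF, URIRef
from rdflib.namespace import XSD

# Hypothetical annotation ontology namespace for illustration only.
ANNO = Namespace("http://example.org/collab-annotation#")

g = Graph()
g.bind("anno", ANNO)

# An inferred high-level event: a speaker is identified in the session.
event = URIRef("http://example.org/session42/event/17")
g.add((event, RDF.type, ANNO.SpeakerIdentified))
g.add((event, ANNO.participant, ANNO.alice))
g.add((event, ANNO.atTime,
       Literal("2005-06-01T10:15:30", datatype=XSD.dateTime)))

# Serialise in Turtle syntax, e.g. for publication into a tuple space.
print(g.serialize(format="turtle"))
```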
