1. Contingent workflow modelling for a didactic approach to project management in media content production
Fuschi, David Luigi, January 2014
This thesis is motivated by the steep increase in grass-roots content production and the transformation of Web 2.0 consumers into "prosumers", a concept that pre-dated the web itself (Toffler 1980). The notion of the "prosumer" in Web 2.0 and beyond presumes an increasingly wide-scale ability for content creation, together with a much deeper understanding of the implications and associated risks (at all levels, from quality to IPR and copyright). Technology today makes it easy to master complex processes such as video and image editing on a home computer or laptop, yet this is not sufficient for managing all the decision points involved in an informed fashion. The widespread availability of office-automation solutions powerful enough to handle fairly complex monitoring and management processes raises the research question of whether a didactic model and support tools could be provided to better serve the growing desire of web users to become content producers. Accordingly, this thesis reports on research to assess the extent to which the complex "creative media content production process" can be described and formalised in terms of models of interacting processes and constraints, integrated within a model-based, data-driven decision support system to serve media content creation and production management for non-professional users working in an office-automation computing environment. The study concluded that the hypothesis is feasible across the core common processes of media production in general, but that lower-level support would require additional user input on application-scenario specifics such as actors' (artistic) preferences, resources and constraints, as particularised for various media production sets, genres and goals; this forms the scope for future work based on this study. The study also concluded that using Petri nets to capture the internal logic of the processes usefully allowed each process to be decoupled from the actors involved, highlighting the inputs, outputs, constraints and risks affecting each node. This made it possible to verify prerequisites and conditions easily and gave a clear indication of the factors determining the successful completion of an action; related risks and the available choice space were thus highlighted, facilitating informed decision-making and a clear understanding of process complexity and its potential points of failure.
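Purely as an illustration of the Petri-net idea described above (and not code from the thesis), the sketch below models one hypothetical production step as a transition whose input places act as pre-requisites; the step names, the workflow fragment and the risk labels are all invented for the example.

```python
# Minimal Petri-net-style sketch (hypothetical example, not from the thesis):
# places hold tokens, and a transition may fire only when every input place
# (pre-requisite) is marked, making the conditions for completion explicit.

class Transition:
    def __init__(self, name, inputs, outputs, risks=None):
        self.name = name          # the production action, e.g. "assemble rough cut"
        self.inputs = inputs      # places that must hold tokens before firing
        self.outputs = outputs    # places that receive tokens once the action completes
        self.risks = risks or []  # factors that can prevent successful completion

    def enabled(self, marking):
        return all(marking.get(place, 0) > 0 for place in self.inputs)

    def fire(self, marking):
        if not self.enabled(marking):
            missing = [p for p in self.inputs if marking.get(p, 0) == 0]
            raise ValueError(f"Pre-conditions not met for '{self.name}': {missing}")
        for place in self.inputs:
            marking[place] -= 1
        for place in self.outputs:
            marking[place] = marking.get(place, 0) + 1
        return marking

# Hypothetical fragment of a video-editing workflow.
marking = {"raw footage ingested": 1, "licence cleared": 0}
rough_cut = Transition("assemble rough cut",
                       inputs=["raw footage ingested", "licence cleared"],
                       outputs=["rough cut ready"],
                       risks=["missing IPR clearance"])

print(rough_cut.enabled(marking))   # False: the IPR pre-condition is not yet satisfied
```

Decoupling the transition from any particular actor in this way is what makes a node's inputs, outputs, constraints and risks directly inspectable.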
2. Model-based testing using visual contracts
Khan, Tamim Ahmed, January 2012
Web services expose only interface-level information, abstracting away implementation details. Testing is a time-consuming and resource-intensive activity, so it is important to minimize the set of test cases executed without compromising quality. Since white-box testing techniques and traditional structural coverage criteria require access to code, a model-based approach is needed for web service testing. Testing relies on oracles to provide expected outcomes for test cases; if implemented manually, oracles depend on the testers' understanding of functional requirements to decide the correct response of the system to every test case. As a result, they are costly to create and maintain, and their quality depends on correct interpretation of the requirements. Alternatively, if suitable specifications are available, oracles can be generated automatically at lower cost and with better quality. We propose to specify service operations as visual contracts, with executable formal specifications given as rules of a typed attributed graph transformation system, and associate operation signatures with these rules to provide test oracles. We analyze dependencies and conflicts between visual contracts to develop a dependency graph, and propose model-based coverage criteria over this dependency graph to assess the completeness of test suites. We also propose a mechanism to determine which of the potential dependencies and conflicts were exercised by a given test case. While the tests execute, the model is simulated and coverage is recorded and measured against the criteria. The criteria are formalized and the dynamic detection of conflicts and dependencies is developed. This requires keeping track of occurrences and overlaps of pre- and post-conditions, and their enabling and disabling, in successive model states, and interpreting these in terms of the static dependency graph. Systems evolve over time and need retesting each time there is a change; to verify that the quality of the system is maintained, we use regression testing. Since regression test suites tend to be large, we isolate the affected parts of the system and retest only those parts by rerunning a selected subset of the total test suite. We analyze the test cases that were executed on both versions and propose a mechanism to transfer the coverage provided by these test cases. This information helps us to assess the completeness of the test suite on the new version without executing all of it.
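As a hedged illustration of how dependency-graph coverage and regression-test selection of this kind can be computed (the contract names, dependencies and tests below are hypothetical, not taken from the thesis):

```python
# Hypothetical sketch: measuring how much of a static dependency graph between
# visual contracts a test suite exercises, and selecting tests for regression.

# Static dependencies derived from conflict/dependency analysis of the contracts.
dependency_graph = {
    ("createOrder", "addItem"),
    ("addItem", "checkout"),
    ("createOrder", "cancelOrder"),
}

# Dependencies observed while simulating the model alongside each executed test.
exercised_by_test = {
    "t1": {("createOrder", "addItem"), ("addItem", "checkout")},
    "t2": {("createOrder", "addItem")},
}

covered = set().union(*exercised_by_test.values())
coverage = len(covered & dependency_graph) / len(dependency_graph)
print(f"dependency coverage: {coverage:.0%}")   # 67%: the cancelOrder edge is never exercised

# Regression selection: rerun only tests whose exercised dependencies touch changed contracts.
changed = {"checkout"}
retest = [t for t, deps in exercised_by_test.items()
          if any(src in changed or tgt in changed for src, tgt in deps)]
print(retest)   # ['t1']
```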
3. Using the World Wide Web as a source of research information: experience and perceptions of the web amongst researchers in the social sciences
Robinson, Michael Robert Owen, January 2012
This study examines researchers' behaviour, experience and perceptions concerning their use of the Web as a source of research information. Through in-depth interviews with active social science researchers from Hong Kong's higher education institutions, it focuses on the extent to which researchers use the Web to find research information, the way in which they approach the search task, and their perceptions of the reliability, trustworthiness and legitimacy of material retrieved from the Web. The study concludes that researchers have incorporated the Web into their typical information-seeking behaviour, alongside other resources such as electronic journals and full-text databases. However, the Web has not displaced library-sponsored electronic resources to any significant degree, with researchers still depending on their availability for the compilation of literature reviews and other information-gathering tasks. Researchers often adopt simplistic and somewhat spontaneous strategies for searching the Web, and most settle for a quick scan of only the highest-ranked results of a search. Despite this relatively casual approach to searching, researchers nonetheless applied quite rigorous standards of critical evaluation to material they discovered on the Web. Lacking the supposedly built-in quality assurances of published peer-reviewed literature, researchers sought other cues to determine the value of otherwise uncited material, and were to some extent hyper-critical in their attitudes towards using material from the Web compared with other sources. The study suggests that a number of opportunities exist for academic libraries to engage in the research process, in terms of training, product familiarization, and the organization and quality assurance of Web resources.
4. Modelling dynamic and contextual user profiles for personalized services
Hawalah, Ahmad, January 2012
During the last few years, the Internet and the WWW have become a major source of information as well as an essential platform for mass media, communication, e-commerce and entertainment. This expansion has led to information overload, so finding relevant information has become increasingly challenging. Personalization and recommender systems have been widely used over the past few years to overcome this information overload problem. The main objective of these systems is to learn user interests and then provide a personalized experience to each user accordingly. However, as information on the WWW increases, so do users' demands: web personalization systems need to provide users not only with recommendations for relevant information, but also with these recommendations in the right situation. Examining current work in the personalization field shows a limitation in providing a generic personalization system that can model dynamic and contextual profiles to deliver more intelligent personalized services. Most current systems are not able to adapt to users' frequently changing behaviours, and ignore the fact that users might have different preferences in different situations and contexts. Aiming to address these limitations, this thesis focuses on modelling conceptual user profiles that are dynamic and contextual within a content-based platform. The novelty lies in the way these profiles are learnt, adapted, exploited and integrated to infer not just highly relevant items, but to provide such items in the right situation.
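A minimal sketch of what a dynamic, contextual profile might look like in code is given below; the decay scheme, context labels and interests are illustrative assumptions, not the mechanism proposed in the thesis.

```python
# Hypothetical sketch: a contextual, decaying user profile in which the same
# user can hold different concept weights in different contexts.

from collections import defaultdict

class ContextualProfile:
    def __init__(self, decay=0.9):
        self.decay = decay
        # (context, concept) -> interest weight
        self.weights = defaultdict(float)

    def observe(self, context, concept, strength=1.0):
        """Reinforce a concept in a given context, decaying older evidence in that context."""
        for key in list(self.weights):
            if key[0] == context:
                self.weights[key] *= self.decay
        self.weights[(context, concept)] += strength

    def top(self, context, n=3):
        scored = [(c, w) for (ctx, c), w in self.weights.items() if ctx == context]
        return sorted(scored, key=lambda item: item[1], reverse=True)[:n]

profile = ContextualProfile()
profile.observe(("weekday", "work"), "finance news")
profile.observe(("weekend", "home"), "football")
profile.observe(("weekend", "home"), "cooking")
print(profile.top(("weekend", "home")))   # weekend-at-home interests differ from weekday ones
```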
5. Enhancing folksonomies to improve information sharing and search
Awawdeh, Ruba, January 2011
Folksonomies are social collaborative systems which represent a method of self-organisation, where users save their electronic resources online and create personal metadata (tags) to describe them. Users can share their resources with other users, creating social networks between them. Recently, folksonomies have spread widely and rapidly on the World Wide Web, and the number of web sites which employ them is increasing every day. Users' tags are freely chosen words which are not restricted by any controlled vocabulary rules. As a consequence, they suffer from certain drawbacks (e.g. misspelling, synonymy and polysemy) which mean that user-created tags, from the point of view of searching and usability, cannot be fully relied upon. Many research projects have been carried out in the area of folksonomies and their usability, but there is little work to date focusing on searching within folksonomies. This thesis analyses the quality of user-created tags and examines how their quality can be enhanced for searching and sharing purposes. It makes a number of contributions to the field of folksonomies, including: a suggested improvement to address the limited quality and quantity of user-created tags; and a prototype which enhances user-created tags and overcomes some of their limitations with automatically extracted tag sets, leading to an improvement in search capability within folksonomy systems. Controlled experiments were employed to determine the effectiveness of the prototype, examining whether search results were more relevant to the user's query when using the enhanced tag set than when using the user-created tag set alone. The results indicate that the relevance of the search results improved when the enhanced tag set was used.
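A toy sketch of the tag-enhancement idea is shown below; the spelling and synonym tables are hypothetical stand-ins for the automatically extracted tag sets the thesis describes.

```python
# Hypothetical sketch: enhancing user-created tags with a normalised, expanded
# tag set so that searches match more resources despite misspellings and synonymy.

SPELLING = {"recipies": "recipes", "photografy": "photography"}
SYNONYMS = {"photography": {"photo", "picture"}, "recipes": {"cooking"}}

def enhance(user_tags):
    enhanced = set()
    for tag in user_tags:
        tag = tag.strip().lower()
        tag = SPELLING.get(tag, tag)          # repair common misspellings
        enhanced.add(tag)
        enhanced |= SYNONYMS.get(tag, set())  # add synonyms to bridge vocabulary gaps
    return enhanced

resource_tags = enhance(["Photografy", "recipies"])
query = {"cooking"}
print(query & resource_tags)   # the query now matches, despite the original misspelt tags
```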
6. Link services for linked data
Yang, Yang, January 2014
This thesis investigated the concept of building link services as an extension of Linked Data to improve its navigability (and thus the linking of the Web of Linked Data). The study first considered the Semantic Web URI and how an agent understands what a URI refers to when dereferencing it. As a result, a generic URI dereferencing algorithm was designed which can be used by any agent to consume Linked Data. The navigability of the Web of Linked Data was then defined: how an agent can follow links to discover more data. To understand how the Web of Linked Data is connected, this study found 425 million across-dataset URIs (URIs that link two different datasets and enable discoverability between them) on the Linked Data cloud, with only 7.5% of resources linked to non-local datasets. To improve the navigability of the Web of Linked Data, a set of link services was built. These link services are RESTful services that take a link (URI) as input and return an RDF document containing linking information for the requested URI. They are: a resolution service (retrieves the RDF description of the requested URI for agents), a link extraction service (extracts URIs from an RDF document), a linkbase service (third-party hosting of link relations between datasets, especially for data which were not originally linked), a reasoning service (applies reasoning rules to generate a new RDF document), a composition service (composes multiple RDF documents into one document), and a link injection service (injects extra link relations into the RDF document requested by the client). Using link services almost always requires multiple requests from the client. Thus, to make the services transparent to clients and to enable clients to orchestrate link services easily, a link service proxy was built that can be used on the client side with any Linked Data application. When a client requests a URI via HTTP, the proxy injects link relations into the requested RDF documents on the fly, hence augmenting Linked Data. The link service proxy was evaluated using four services built during the enAKTing project: a PSI backlink service, a sameAs co-reference service, geo-reasoning services, and a link injection service. This work showed that these services alone added 373 million across-dataset foreign URIs, almost doubling the previously mentioned 7.5% across-dataset foreign URI coverage to 14%. We also demonstrated how the link service proxy works dynamically with the Web browser to enrich the Web of Linked Data. As all link services can be easily reused and programmed to navigate the Web of Linked Data, as well as to generate new link services, we believe this provides a basis for agents to consume Linked Data. Following this trend, Linked Data consumers will only need to orchestrate or create link services to consume the Web of Linked Data, and any other Web-based Linked Data application can be understood as a specialised service built on top of the link services.
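To illustrate the kind of dereference-and-extract step such a link service or proxy performs, here is a hedged sketch using requests and rdflib; the DBpedia URI is a real public resource used only as an example, error handling is omitted, and this is not the thesis's own dereferencing algorithm.

```python
# Hypothetical sketch: dereference a URI via content negotiation and count the
# links it exposes to non-local datasets (across-dataset links).

import requests
from rdflib import Graph, URIRef

def dereference(uri):
    """Fetch an RDF description of a URI by asking for Turtle."""
    resp = requests.get(uri, headers={"Accept": "text/turtle"}, timeout=10)
    resp.raise_for_status()
    g = Graph()
    g.parse(data=resp.text, format="turtle")
    return g

def extract_links(graph, local_prefix):
    """Return object URIs pointing outside the local dataset."""
    return {o for o in graph.objects()
            if isinstance(o, URIRef) and not str(o).startswith(local_prefix)}

g = dereference("http://dbpedia.org/resource/Tim_Berners-Lee")
foreign = extract_links(g, "http://dbpedia.org/")
print(len(foreign), "links to non-local datasets")
```

A proxy in this spirit would then merge extra link relations (for example from a linkbase or sameAs service) into the graph before returning it to the client.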
7. Towards an understanding of Web growth: an empirical study of socio-technical web activity of Open Government Data
Tinati, Ramine, January 2013
This thesis proposes a new interdisciplinary approach to understanding how the World Wide Web is growing as a socio-technical network, co-constructed by the interrelationships between society and technological developments. The thesis uses a longitudinal empirical case study of Web and offline activity surrounding the UK Open Government Data community to explore the Web as a socio-technical 'networks of networks'. It employs a mixed-methods framework, underpinned by sociological theory but also drawing on computer science for technical approaches to the problem of understanding the Web. The study uses quantitative and qualitative sources of data in a novel analysis of online and offline activities to explore the formation and growth of UK Open Government Data, and to understand both this case and the Web itself. The thesis argues that neither technology nor 'the social' alone is sufficient to explain the growth of this network, or indeed the Web, but that these networks develop out of closely co-constructed relationships and interactions between humans and technology. This has implications not only for how the Web is understood, but for the kinds of future technological design and social activity that will be implicated in its continued growth.
8. Semantic technologies: from niche to the mainstream of Web 3? A comprehensive framework for web information modelling and semantic annotation
Dotsika, Fefie, January 2012
Context: Web information technologies developed and applied in the last decade have considerably changed the way web applications operate and have revolutionised information management and knowledge discovery. Social technologies, user-generated classification schemes and formal semantics have a far-reaching sphere of influence. They promote collective intelligence, support interoperability, enhance sustainability and instigate innovation. Contribution: The research carried out and the consequent publications follow the various paradigms of semantic technologies, assess each approach, evaluate its efficiency, identify the challenges involved and propose a comprehensive framework for web information modelling and semantic annotation, which is the thesis' original contribution to knowledge. The proposed framework assists web information modelling, facilitates semantic annotation and information retrieval, enables system interoperability and enhances information quality. Implications: Semantic technologies coupled with social media and end-user involvement can exert innovative influence with wide organisational implications, benefiting a considerable range of industries. The scalable and sustainable business models of social computing and the collective intelligence of organisational social media can be resourcefully paired with internal research and knowledge from interoperable information repositories, back-end databases and legacy systems. Semantically enriched ("semantified") information assets can free human resources to better serve business development, support innovation and increase productivity.
9. Data extraction & semantic annotation from web query result pages
Anderson, Neil David Alan, January 2016
Our unquenchable thirst for knowledge is one of the few things that really defines our humanity. Yet the Information Age, which we have created, has left us floating aimlessly in a vast ocean of unintelligible data. Hidden Web databases are one massive source of structured data. The contents of these databases are, however, often only accessible through a query posed by a user. The data returned in these Query Result Pages is intended for human consumption and, as such, has nothing more than an implicit semantic structure which can be understood visually by a human reader, but not by a computer. This thesis presents an investigation into the processes of extraction and semantic understanding of data from Query Result Pages. The work is multi-faceted and includes, at the outset, the development of a vision-based data extraction tool. This is followed by the development of a number of algorithms which use machine-learning techniques, first to align the extracted data into semantically similar groups and then to assign a meaningful label to each group. Part of the work undertaken in fulfilment of this thesis has also addressed the lack of large, modern datasets containing a wide range of result pages representative of those typically found online today; in particular, an innovative crowdsourced dataset is presented. Finally, the work concludes by examining techniques from the complementary research field of Information Extraction, providing an initial, critical assessment of how these mature techniques could be applied to this research area.
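As a simplified, hypothetical illustration of the alignment-and-labelling stage (a real system would use learned classifiers rather than the hand-written rules below):

```python
# Hypothetical sketch: align values extracted from result records into
# semantically similar groups by feature signature, then label each group.

import re

def features(value):
    if re.fullmatch(r"£\d+(\.\d{2})?", value):
        return "money"
    if re.fullmatch(r"\d{4}", value):
        return "year"
    return "text"

records = [["The Hobbit", "1937", "£7.99"],
           ["Dune", "1965", "£8.99"]]

# Group values across records by their feature signature, not their position.
groups = {}
for record in records:
    for value in record:
        groups.setdefault(features(value), []).append(value)

labels = {"money": "Price", "year": "Publication year", "text": "Title"}
for signature, values in groups.items():
    print(labels[signature], "->", values)
```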
10. Engineering a Semantic Web trust infrastructure
Cobden, Marcus, January 2014
The ability to judge the trustworthiness of information is an important and challenging problem in the field of Semantic Web research. In this thesis, we take an end-to-end look at the challenges posed by trust on the Semantic Web, and present contributions in three areas: a Semantic Web identity vocabulary, a system for bootstrapping trust environments, and a framework for trust-aware information management. Typically, Semantic Web agents, which consume and produce information, are not described with sufficient information to permit those interacting with them to make good judgements of trustworthiness. A descriptive vocabulary for agent identity is required to enable effective inter-agent discourse and the growth of trust and reputation within the Semantic Web; we therefore present such a foundational identity ontology for describing web-based agents. It is anticipated that the Semantic Web will suffer from a trust network bootstrapping problem. We propose a novel approach which harnesses open data to bootstrap trust in new trust environments, bringing together public records published by a range of trusted institutions in order to encourage trust in identities within new environments. Information integrity and provenance are both critical prerequisites for well-founded judgements of information trustworthiness. We propose a modification to the RDF Named Graph data model in order to address serious representational limitations of the named graph proposal which affect the ability to cleanly represent claims and provenance records. Finally, we propose a novel graph-based approach for recording the provenance of derived information. This approach offers computational and memory savings while maintaining the ability to answer graph-level provenance questions. In addition, it allows new optimisations such as strategies to avoid needless repeat computation, and a delta-based storage strategy which avoids data duplication.
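A small sketch of recording graph-level provenance for a named graph is shown below, using rdflib and the W3C PROV-O vocabulary; the graph URIs and triples are hypothetical, and the sketch illustrates the general named-graph provenance idea rather than the modified data model or delta-based storage proposed in the thesis.

```python
# Hypothetical sketch: a named graph holding a claim, with provenance statements
# about that graph kept in the default graph, using PROV-O terms.

from rdflib import Dataset, URIRef, Literal, Namespace
from rdflib.namespace import XSD

PROV = Namespace("http://www.w3.org/ns/prov#")   # W3C PROV-O vocabulary
EX = Namespace("http://example.org/")            # hypothetical namespace

ds = Dataset()

# A named graph containing a derived claim.
claims_uri = URIRef("http://example.org/graphs/claims-1")
claims = ds.graph(claims_uri)
claims.add((EX.alice, EX.worksFor, EX.acme))

# Graph-level provenance, recorded in the default graph.
ds.add((claims_uri, PROV.wasDerivedFrom,
        URIRef("http://example.org/graphs/source-hr-records")))
ds.add((claims_uri, PROV.generatedAtTime,
        Literal("2014-01-01T00:00:00Z", datatype=XSD.dateTime)))

print(ds.serialize(format="trig"))
```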