21

User modelling for knowledge sharing in e-mail communication

Kim, Sanghee January 2002 (has links)
This thesis addresses the problem of sharing and transferring knowledge within knowledge-intensive organisations from a user modelling perspective with the purpose of improving individual and group performance. It explores the idea of creating organisational environments from which any of the users involved can benefit by being aware of each other such that sharing expertise between those who are knowledge providers and those who are knowledge seekers can be maximised. In order to encourage individuals to share such valuable expertise, it also explores the idea of keeping a balance between ensuring the availability of information and the increase in user workloads due to the need to handle unwanted information. In an attempt to demonstrate the ideas mentioned above, this research examines the application of user modelling techniques to the development of communication-based task learning systems based on e-mail communication. The design rationale for using e-mail is that personally held expertise is often explicated through e-mail exchanges since it provides a good source for extracting user knowledge. The provision of an automatic message categorisation system that combines knowledge acquired from both statistical and symbolic text learning techniques is one of the three themes of this work. The creation of a new user model that captures the different levels of expertise reflected in exchanged e-mail messages, and makes use of them in linking knowledge providers and knowledge seekers is the second. The design of a new information distribution method to reduce both information overload and underload is the third.
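
The abstract describes an automatic message categorisation system that combines statistical and symbolic text learning. As a rough, hedged illustration of the statistical half only, the sketch below trains a bag-of-words Naive Bayes classifier over e-mail bodies; the categories, sample messages, and use of scikit-learn are assumptions for illustration and are not taken from the thesis.

```python
# Minimal sketch of a statistical e-mail categoriser (the symbolic,
# rule-based learner the thesis combines with it is not shown).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Hypothetical training data: (message body, category) pairs.
train_texts = [
    "Could you review the attached draft of the project report?",
    "The server will be down for maintenance on Friday evening.",
    "Can anyone explain how the indexing module handles duplicates?",
]
train_labels = ["task-request", "announcement", "knowledge-seeking"]

classifier = make_pipeline(CountVectorizer(), MultinomialNB())
classifier.fit(train_texts, train_labels)

# Categorise an incoming message.
print(classifier.predict(["Who knows how the indexing module works?"]))
```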
22

Mechanism design for eliciting costly observations in next generation citizen sensor networks

Papakonstantinou, Athanasios January 2010 (has links)
Citizen sensor networks are open information systems in which members of the public act as information providers. The information distributed in such networks ranges from observations of events (e.g. noise measurements or monitoring of environmental parameters) to probabilistic estimates (e.g. projected traffic reports or weather forecasts). However, due to rapid advances in technology such as high-speed mobile internet and sophisticated portable devices (from smart-phones to hand-held game consoles), it is expected that citizen sensor networks will evolve. This evolution will be driven by an increase in the number of information providers, since, in the future, it will be much easier to gather and communicate information at a large scale, which in turn will trigger a transition to more commercial applications. Given this projected evolution, one key difference between future citizen sensor networks and conventional present-day ones is the emergence of self-interested behaviour, which can manifest in two main ways. First, information providers may choose to commit insufficient resources when producing their observations, and second, they may opt to misreport them. Both aspects of this self-interested behaviour are ignored in current citizen sensor networks. However, as their range of applications broadens and commercial applications expand, information providers are likely to demand some kind of payment (e.g. real or virtual currency) for the information they provide. Naturally, those interested in buying this information will also require guarantees of its quality. It is these issues that we deal with in this thesis through the introduction of a series of novel two-stage mechanisms, based on strictly proper scoring rules. We focus on strictly proper scoring rules, as they have been used in the past as a method of eliciting truthful reporting of predictions in various forecasting scenarios (most notably in weather forecasting). By using payments that are based on such scoring rules, our mechanisms effectively address the issue of selfish behaviour by motivating information providers in a citizen sensor network to, first, invest the resources required by the information buyer in the generation of their observations, and second, to report them truthfully. To begin with, we introduce a mechanism that allows the centre (acting as an information buyer) to select a single agent that can provide a costly observation at a minimum cost. This is the first time a mechanism has been derived for a setting in which the centre has no knowledge of the actual costs involved in the generation of the agents' observations. Building on this, we then make two further contributions to the state of the art, with the introduction of two extensions of this mechanism. First, we extend the mechanism so that it can be applied in a citizen sensor network where the information providers do not have the same resources available for the generation of their observations. These different capabilities are reflected in the quality of the provided observations. Hence, the centre must select multiple agents by eliciting their costs and the maximum precisions of their observations and then ask them to produce these observations. Second, we consider a setting where the information buyer cannot gain any knowledge of the actual outcome beyond what it receives through the agents' reports.
Now, because the centre is not able to evaluate the providers' reported observations through external means, it has to rely solely on the reports it receives. It does this by fusing the reports together into one observation which it then uses as a means to assess the reports of each of the providers. For the initial mechanism and each of the two extensions, we prove their economic properties (i.e. incentive compatibility and individual rationality) and then present empirical results comparing a number of specific scoring rules, including the quadratic, spherical, and logarithmic rules and a parametric family of scoring rules. These results show that although the logarithmic scoring rule minimises the mean and variance of an agent's payment, using it may result in unbounded payments if an agent provides an observation of poor quality. Conversely, the payments of the parametric family exhibit finite bounds and are similar to those of the logarithmic rule for specific values of the parameter. Thus, we show that the parametric scoring rule is the best candidate in our setting. We empirically evaluate both extended mechanisms in the same way, and for the first extension, we show that the mechanism describes a family of possible ways to perform the agent selection, and that there is one that dominates all others. Finally, we compare both extensions with the peer prediction mechanism introduced by \cite{trustsr1} and show that in all three mechanisms the total expected payment is the same, while for both our mechanisms the variance in the total payment is significantly lower.
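
The abstract compares the quadratic, spherical, and logarithmic strictly proper scoring rules. The sketch below gives their standard definitions for a discrete probability report, purely as a hedged illustration of how such payments behave; the thesis applies these rules to continuous observation reports, and its parametric family is not reproduced here.

```python
# Standard strictly proper scoring rules for a discrete report p and
# a realised outcome i. Note the logarithmic rule is unbounded below,
# matching the abstract's remark about poor-quality observations.
import math

def quadratic_score(report, outcome):
    """S(p, i) = 2*p_i - sum_j p_j^2."""
    return 2 * report[outcome] - sum(p * p for p in report)

def spherical_score(report, outcome):
    """S(p, i) = p_i / ||p||_2."""
    return report[outcome] / math.sqrt(sum(p * p for p in report))

def logarithmic_score(report, outcome):
    """S(p, i) = log(p_i)."""
    return math.log(report[outcome])

# Hypothetical report over three possible outcomes; outcome 0 occurs.
report = [0.7, 0.2, 0.1]
for rule in (quadratic_score, spherical_score, logarithmic_score):
    print(rule.__name__, round(rule(report, 0), 3))
```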
23

Generic security templates for information system security arguments : mapping security arguments within healthcare systems

He, Ying January 2014 (has links)
Industry reports indicate that the number of security incidents occurring in healthcare organisations is increasing. Lessons learned (i.e. the causes of a security incident and the recommendations intended to avoid any recurrence) from those security incidents should ideally inform information security management systems (ISMS). The sharing of lessons learned is an essential activity in the “follow-up” phase of the security incident response lifecycle, one that has long been acknowledged but not given enough attention in academia and industry. This dissertation proposes a novel approach, the Generic Security Template (GST), aiming to feed back the lessons learned from real-world security incidents to the ISMS. It adapts the graphical Goal Structuring Notation (GSN) to present the lessons learned in a structured manner by mapping them to the security requirements of the ISMS. The suitability of the GST has been confirmed by demonstrating that instances of the GST can be produced from real-world security incidents in different countries, based on in-depth analysis of case studies. The usability of the GST has been evaluated using a series of empirical studies. The GST is empirically evaluated in terms of its effectiveness in assisting the communication of the lessons learned from security incidents, as compared to the traditional text-based approach alone. The results show that the GST can help to improve accuracy and reduce the mental effort involved in identifying the lessons learned from security incidents, and the results are statistically significant. The GST is further evaluated to determine whether users can apply it to structure insights derived from a specific security incident. The results show that students with a computer science background can create an instance of the GST. The acceptability of the GST is assessed in a healthcare organisation. Strengths and weaknesses are identified and the GST has been adjusted to fit organisational needs. The GST is then further tested to examine its capability to feed back the security lessons to the ISMS. The results show that, by using the GST, lessons identified from security incidents in one healthcare organisation in a specific country can be transferred to another and can indeed inform the improvement of the ISMS. In summary, the GST provides a unified way to feed back the lessons learned to the ISMS. It fosters an environment in which different stakeholders can speak the same language while exchanging the lessons learned from security incidents around the world.
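
The abstract describes mapping lessons learned onto ISMS requirements using GSN-style argument structures. As a hedged, hypothetical sketch (the node names, incident, and representation are illustrative assumptions, not the thesis's own notation or tooling), a GST instance could be modelled as a small tree of goal, strategy, and solution nodes:

```python
# Hypothetical representation of a GSN-style argument tree linking a
# lesson learned to an ISMS security requirement.
from dataclasses import dataclass, field
from typing import List

@dataclass
class GSNNode:
    kind: str                 # "Goal", "Strategy", or "Solution"
    statement: str
    children: List["GSNNode"] = field(default_factory=list)

    def render(self, depth: int = 0) -> str:
        lines = ["  " * depth + f"[{self.kind}] {self.statement}"]
        for child in self.children:
            lines.append(child.render(depth + 1))
        return "\n".join(lines)

# Illustrative template instance for an invented incident.
template = GSNNode("Goal", "ISMS requirement: control access to patient records", [
    GSNNode("Strategy", "Argue over lessons learned from the incident", [
        GSNNode("Solution", "Lesson: shared passwords enabled the breach"),
        GSNNode("Solution", "Recommendation: enforce individual credentials"),
    ]),
])
print(template.render())
```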
24

Cultural mediators and the everyday making of 'digital capital' in contemporary Chile

Arriagada, Arturo January 2014 (has links)
This thesis studies processes of cultural mediation and the role of digital media within them. It is based on the experiences of a group of cultural mediators within a particular music scene in contemporary Chile, and focuses on actors’ meaningful repertoires of action, their material arrangements and their relation with information and communication technologies (ICTs). ‘Mediation’ in a broader sense means the processes through which human and non-human agencies produce and shape meanings, attaching them to various cultural flows such as information, images, and identities. As cultural mediators, actors define the music scene, curating and circulating through digital media various flows which they deem worthy of being considered by audiences, and distinguishing themselves across different fields. The thesis is based on nine months of fieldwork (2011) in Santiago, following the everyday practices of the creators of eight music websites through which global and local cultural flows are mediated, organised, and circulated. It analyses how various technological devices facilitate individuals’ construction of networks where cultural flows circulate, and through which their uses of taste are displayed and objectified. It proposes the concept of ‘digital capital’ as an assemblage of actors, practices, objects, and meanings, which is convertible into other types of capital (e.g. economic) and exchangeable in various fields. It is a mode of practice and expertise through which, using digital technologies, individuals create networks where cultural flows circulate. Through the making of websites, music fans become cultural mediators, developing their digital capital as cultural and technical expertise. This expertise is convertible into economic capital and positionality across different fields, especially the field of advertising. Digital capital can be summarised in the question: ‘what are the connections and associations between technical knowledge, cultural flows, and social position, as well as conversions of capital, behind someone who is using Twitter or Facebook, or making a website about a music scene?’ Against this backdrop, the thesis explores how actors produce and perform ‘cultures of mediation’, commoditising culture as consumption goods.
25

Human information processing based information retrieval

Graf, Erik January 2011 (has links)
This work focused on the question of how the concept of relevance in Information Retrieval can be validated. The work is motivated by the consistent difficulties of defining the meaning of the concept, and by advances in the field of cognitive science. Analytical and empirical investigations are carried out with the aim of devising a principled approach to the validation of the concept. The foundation for this work was set by interpreting relevance as a phenomenon occurring within the context of two systems: an IR system and the cognitive processing system of the user. In light of the cognitive interpretation of relevance, an analysis of the lessons learnt in cognitive science with regard to the validation of cognitive phenomena was conducted. It identified that construct validity constitutes the dominant approach to the validation of constructs in cognitive science. Construct validity is a proposal for conducting validation in scenarios where no direct observation of a phenomenon is possible. Given the limitations on direct observation of a construct (i.e. a postulated theoretical concept), it bases validation on the evaluation of the construct's relations to other constructs. Based on the interpretation of relevance as a product of cognitive processing, it was concluded that these limitations on direct observation apply to its investigation. The evaluation of its applicability to an IR context focused on the exploration of the nomological network methodology. A nomological network constitutes an analytically constructed set of constructs and their relations. The construction of such a network forms the basis for establishing construct validity through investigation of the relations between constructs. An analysis of contemporary insights into the nomological network methodology identified two important aspects with regard to its application in IR. The first aspect is the choice of context and the identification of a pool of candidate constructs for inclusion in the network. The second consists of identifying criteria for the selection of a set of constructs from the candidate pool. The identification of the pertinent constructs for the network was based on a review of the principles of cognitive exploration, and an analysis of the state of the art in text-based discourse processing and reasoning. On that basis, a listing of known sub-processes contributing to the pertinent cognitive processing was presented. Given the identification of a large number of potential candidates, the next step consisted of inferring criteria for the selection of an initial set of constructs for the network. The investigation of these criteria focused on the consideration of pragmatic and meta-theoretical aspects. Based on a survey of experimental means in cognitive science and IR, five pragmatic criteria for the selection of constructs were presented. Consideration of meta-theoretically motivated criteria required investigating the specific challenges posed by the validation of highly abstract constructs. This question was explored based on the underlying considerations of the Information Processing paradigm and Newell’s (1994) cognitive bands. This led to the identification of a set of three meta-theoretical criteria for the selection of constructs. Based on the criteria and the demarcated candidate pool, an IR-focused nomological network was defined.
The network consists of the constructs of relevance and of the type and grade of word relatedness. A necessary prerequisite for making inferences based on a nomological network is the availability of validated measurement instruments for the constructs. To that end, two validation studies targeting the measurement of the type and grade of relations between words were conducted. Clarifying the validity of the measurement instruments enabled the application of the nomological network. A first step of the application consisted of testing whether the constructs in the network are related to each other. Based on the alignment of measurements of relevance and the word-related constructs, this was concluded to be the case. The relation between the constructs was characterised by varying the word-related constructs over a large parameter space and observing the effect of this variation on relevance. Three hypotheses relating to different aspects of the relations between the word-related constructs and relevance were investigated. It was concluded that conclusive confirmation of the hypotheses requires an extension of the experimental means underlying the study. Based on converging observations from the empirical investigation of the three hypotheses, it was concluded that semantic and associative relations distinctly differ with regard to their impact on relevance estimation.
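
As a toy, hedged illustration of testing whether two constructs in a nomological network are related, the sketch below correlates a per-document grade-of-relatedness score with a relevance score. The figures are invented and the thesis instead varies the word-related constructs over a large parameter space; this is only meant to show the general shape of such a check.

```python
# Correlating hypothetical measurements of two constructs.
from scipy.stats import pearsonr

grade_of_relatedness = [0.12, 0.35, 0.48, 0.60, 0.75, 0.81]
relevance_score      = [0.10, 0.30, 0.55, 0.58, 0.70, 0.90]

r, p_value = pearsonr(grade_of_relatedness, relevance_score)
print(f"correlation r = {r:.2f}, p = {p_value:.3f}")
```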
26

Mediated transparency : truth, truthfulness, and rightness in digital healthcare discourse

Blackett, Nina Jane January 2013 (has links)
This thesis addresses the challenges of producing digitally mediated healthcare information, a high-stakes arena which is conceptualised as a complex discourse and its diverse producers as interlocutors within this discourse. The study is located theoretically in the tradition of universal or formal pragmatics, the foundation of Habermas’s theory of communicative action. Building on this theoretical core a conceptual framework is developed that integrates insight from several other traditions, including communication studies. The notion of communicative transparency is aligned with the idealised goal of a rich informational context supporting a range of perspectives in movement towards a balanced and consensual understanding by lay and expert actors of healthcare in our world. The central research question is: Can digital mediation increase the transparency of healthcare communication? The empirical focus rests on two organisations involved in the creation of digital information products. Key mediators of meaning in digital healthcare information are identified as the diverse types of expertise of its producers, the materiality of digital artefacts, and the communicative mechanisms, processes and practices that often lead to departures from the normative idealised standard of transparency. The methodology is a comparative case analysis based on field research employing principally interviews to build a rich corpus, analysed using a recursive in-depth thematic coding procedure to reveal the ways in which digitally mediated healthcare meanings are shaped and shared. The study demonstrates how communicative transparency emerges from shared frames of reference and common models of communication. It is concluded that digital mediation can indeed increase the transparency of healthcare information by supporting the deepening of Habermasian rational discourse, providing that validity claims to truth, truthfulness, and rightness can be raised and resolved at all stages in the discourse among all interlocutors, whatever their role and status.
27

Chord sequence patterns in OWL

Wissmann, Jens January 2012 (has links)
This thesis addresses the representation of, and reasoning on, musical knowledge in the Semantic Web. The Semantic Web is an evolving extension of the World Wide Web that aims at describing information that is distributed on the web in a machine-processable form. Existing approaches to modelling musical knowledge in the context of the Semantic Web have focused on metadata. The description of musical content and reasoning, as well as the integration of content descriptions and metadata, are still open challenges. This thesis discusses the possibilities of representing musical knowledge in the Web Ontology Language (OWL), focusing on chord sequence representation, and presents and evaluates a newly developed solution. The solution consists of two main components. Ontological modelling patterns for musical entities such as notes and chords are introduced in the MEO ontology. A sequence pattern language and ontology (SEQ) has been developed that can express patterns in a form resembling regular expressions. As MEO and SEQ patterns both rewrite to OWL, they can be combined freely. Reasoning tasks such as instance classification, retrieval and pattern subsumption are then executable by standard Semantic Web reasoners. The expressiveness of SEQ has been studied, in particular in relation to grammars. The complexity of reasoning on SEQ patterns has been studied theoretically and empirically, and optimisation methods have been developed. There is still great potential for improvement if specific reasoning algorithms were developed to exploit the sequential structure, but the development of such algorithms is outside the scope of this thesis. MEO and SEQ have also been evaluated in several musicological scenarios. It is shown how patterns that are characteristic of musical styles can be expressed and how chord sequence data can be classified, demonstrating the use of the language in web retrieval and as an integration layer for different chord patterns and corpora. Furthermore, possibilities of using SEQ patterns for harmonic analysis are explored using grammars for harmony; both a hybrid system and a translation of limited context-free grammars into SEQ patterns have been developed. Finally, a distributed scenario is evaluated where SEQ and MEO are used in connection with DBpedia, following the Linked Data approach. The results show that applications are already possible and will benefit in the future from improved quality and compatibility of data sources as the Semantic Web evolves.
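
The abstract says SEQ expresses chord-sequence patterns in a form resembling regular expressions. As a hedged analogy only (the chord labels and the ii-V-I example are illustrative assumptions, and none of the OWL/reasoner machinery of MEO and SEQ is reproduced), the idea can be mimicked by matching a regular expression over a serialised chord sequence:

```python
# Toy analogue of a chord-sequence pattern match.
import re

def sequence_to_string(chords):
    """Serialise a chord sequence into a space-delimited string."""
    return " " + " ".join(chords) + " "

# Pattern: any prefix, then a ii-V-I cadence in C major (Dm7 G7 Cmaj7).
ii_v_i = re.compile(r" Dm7 G7 Cmaj7 ")

progression = ["Cmaj7", "Am7", "Dm7", "G7", "Cmaj7"]
if ii_v_i.search(sequence_to_string(progression)):
    print("progression contains a ii-V-I cadence in C major")
```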
28

A distributed instrumentation system for the acquisition of rich, multi-dimensional datasets from railway vehicles

Stewart, Edward James Charles January 2012 (has links)
This thesis presents work carried out over a number of years within the field of railway vehicle instrumentation. The railway industry is currently moving to be more heavily “data driven”. This means that railway organisations are putting policies into place whereby decisions have to be justified based on recorded and citable data. To achieve this, the railway industry is increasingly turning to greater and greater levels of instrumentation to deliver the data on which to base these decisions. This thesis considers not only this increased requirement for data, but also the frameworks and systems that must be put into place in order first to obtain it, and then to extract useful information from it. In particular, the author considers the issue of contextualisation of data, where multiple datastreams may be used to provide context for, or allow more accurate and beneficial interpretation of, each other in order to support better decision making. In order to obtain this data, the thesis explores, through a series of case studies, a number of options for different instrumentation system architectures. This culminates in the development of a distributed system of embedded processors arranged in an extensible modular framework to provide a rich, coherent and integrated dataset which can then be processed contextually to yield a better understanding of the railway system.
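
To illustrate the contextualisation idea in a hedged way, the sketch below aligns two invented on-vehicle datastreams by timestamp so that each vibration sample can be read in the context of the position at which it was recorded. The field names, sampling rates, and hold interpolation are illustrative assumptions, not the architecture developed in the thesis.

```python
# Aligning two hypothetical datastreams by timestamp.
from bisect import bisect_right

accel = [(0.00, 0.12), (0.01, 0.90), (0.02, 0.15)]          # (time s, vertical g)
gps   = [(0.000, (52.45, -1.93)), (0.015, (52.46, -1.92))]  # (time s, lat/lon)

gps_times = [t for t, _ in gps]

def position_at(t):
    """Latest GPS fix at or before time t (simple sample-and-hold)."""
    i = bisect_right(gps_times, t) - 1
    return gps[max(i, 0)][1]

# Each acceleration sample, contextualised with a position.
for t, g in accel:
    print(t, g, position_at(t))
```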
29

Being objective : communities of practice and the use of cultural artefacts in digital learning environments

Hopes, David January 2014 (has links)
Over the past decade there has been a dramatic increase in the volume of digital content created from museum, library and archive collections but research on how this material is actually used, particularly in digital learning environments, has fallen far behind the rate of supply. In order to address this gap, this thesis examines how communities of practice (CoPs) involved in the supply and use of digital artefacts in the Higher Education sector in the UK interact with content and what factors affect this process. It focuses on a case study involving the digitisation of Shakespeare collections used in postgraduate research, and the testing of use in a range of different learning environments. This produced a number of significant findings with implications for the HE and cultural sectors. Firstly, similar patterns of artefact use were found across all users suggesting there are generic ways in which everyone interacts with digital artefacts. However, distinct forms of use did emerge which correspond with membership of particular communities of practice. Secondly, members of a CoP appear to share a particular learning style and this is influenced by the learning environment. Finally, the research indicates that a mixed method for analysing and measuring use, piloted and tested in the case study, is possible.
30

The natural history and management of vestibular schwannomas

Martin, Thomas Peter Cutlack January 2012 (has links)
Over the past decade (2000-), the management of vestibular schwannomas has been in a state of flux. The increasing availability of magnetic resonance imaging has allowed clinicians to monitor tumour progression and, increasingly, it has become recognised that, once diagnosed, a significant proportion of lesions do not continue to grow. As a result, a number of neurotological centres have advocated conservative management as appropriate for small to medium-sized tumours. Birmingham has been one of these centres, and this thesis presents data gathered over the past fifteen years that reflects this change in management, drawing upon the Birmingham Vestibular Schwannoma Database maintained by the author. The thesis addresses issues pertinent to conservative management (growth rates among observed tumours, risk factors for growth, and the evolution of hearing while under observation) and proposes a radiological surveillance protocol. More broadly, the thesis examines other themes important in the management of patients with vestibular schwannomas: the role of functional surgery and the possibility of rehabilitation in single-sided deafness. A number of chapters from the thesis have been published in peer-reviewed journals and are presented here in updated or amended form.
