  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
61

Modelling music : a theoretical approach to the classification of notated Western art music

Lee, Deborah January 2017 (has links)
The classification of notated Western art music is a perennial issue. This thesis analyses and models the knowledge organization of notated Western art music in order to elucidate a theoretical understanding of these classification issues and to offer new ways of viewing music classification in the future. This thesis also considers how music classification contributes to developments in general knowledge organization and compares the classification of Western art music across the library and information science (LIS) and music domains. The research is conducted using a number of analytical techniques, including examining music knowledge organization discourse, analysing examples of LIS classification schemes, unpicking discussions of classification in the music domain and analysing composer worklists in the music domain. After ascertaining how music classification fits into theories of faceted classification, three important facets of music are identified: medium, form and genre, and a quasi-facet of function. These facets are explored in detail over five chapters: the binary vocal/instrumental categorisation; classifying numbers of instruments or voices, accompaniment, arrangements and “extreme” mediums; classifying musical instruments; classifying musical forms and genres; and the quasi-facet of function. Five resulting models of music classification are presented. Model 1 demonstrates the complexities of classifying musical medium, including the interlinked relationships between different parts of musical medium. Model 2 offers a solution to LIS classification’s largely binary view of vocal and instrumental categorisation by suggesting a novel category: “vocinstrumental”. Model 3 illuminates the entrenched dependencies between facets of music, highlighting one of the structural issues with LIS classifications of music. Model 4 offers an original structure of music classification, proposing a simultaneous faceted and genre-based system.
Model 5 compares classification in the music and LIS domains, offering a novel way of considering domain-based classification by codifying various types of relationships between the LIS and domain classifications. This thesis also contributes to the theory and practice of knowledge organization in general through the development of novel frameworks and methodologies to analyse classification schemes: the multiplane approach, reception-infused analysis, webs of Wirkungs (connections) between classification schemes and stress-testing.
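The faceted analysis described in this abstract can be illustrated with a small sketch. This is not the thesis's model, only a toy: the facet names (medium, form/genre, function) follow the abstract, the "vocinstrumental" category follows the description of Model 2, and the vocabulary of vocal mediums and all example values are invented for illustration.

```python
# Toy sketch of a faceted record for a notated work: the thesis's facets
# of medium and form/genre plus the quasi-facet of function, with the
# proposed "vocinstrumental" category extending the binary
# vocal/instrumental split. The VOCAL_MEDIUMS vocabulary is invented.
from dataclasses import dataclass
from typing import List

VOCAL_MEDIUMS = {"voice", "choir", "soprano", "tenor", "baritone"}

@dataclass
class MusicWork:
    title: str
    medium: List[str]      # performing forces, e.g. ["voice", "piano"]
    form_genre: str        # combined form-and-genre facet
    function: str = ""     # quasi-facet, e.g. "liturgical"

def medium_category(work: MusicWork) -> str:
    """Categorise the medium facet, using a third 'vocinstrumental'
    category for works mixing vocal and instrumental forces."""
    has_vocal = any(m in VOCAL_MEDIUMS for m in work.medium)
    has_instrumental = any(m not in VOCAL_MEDIUMS for m in work.medium)
    if has_vocal and has_instrumental:
        return "vocinstrumental"
    return "vocal" if has_vocal else "instrumental"
```

A song with voice and piano, for instance, would fall outside a strictly binary scheme but is captured directly by the third category.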
62

Theory and practice in the analysis of information policy in the digital age : a case study on the formulation of the European Directive on the legal protection of databases

Turner, Paul January 1999 (has links)
This thesis is concerned with the academic study of information policy and aims to improve theoretical and methodological approaches for the analysis of complex information policy environments. In conducting a case study on the formulation of the European directive on the legal protection of databases, up to its adoption in March 1996, the research aims to explore the ways in which copyright and information issues were framed, and solutions shaped by the process of formulating policy responses to them at the European level. At the substantive level the research examines the legal issues arising in the protection of databases in Europe and describes and explains the role of human, organisational and contextual factors in shaping the content of the directive as finally adopted. At the methodological level the research examines the utility of a re-interpreted process model of policy-making for providing a coherent framework within which to conduct analysis of this complex information policy issue. At the theoretical level the research aims to use the case study findings to generate insights for the academic study of complex (European) information policy environments. The literature review begins by examining the development of information policy and considers the main problems that have inhibited the development of a coherent approach to information policy studies from within the information science tradition. It examines the re-interpreted process model of policy-making and presents it as a heuristic device with which to conduct the case study. The literature review also examines in detail the development of copyright policy at the European level and identifies the expansion of protection that has taken place. In particular, the impact of digital information and communication technologies on copyright regimes is considered. The literature review also outlines the emergence of the European Union (EU), and considers how the EU has shaped the characteristics of,
and interactions between, policy actors operating in the European policy-making environment. The case study analysis is conducted in two parts consisting of a detailed analysis of documentary evidence and forty in-depth semi-structured interviews with policy actors directly involved in the formulation of the directive. In deploying the re-interpreted process model the analysis is divided into two overlapping phases linked by the publication of the Commission's formal directive proposal in 1992. To ensure that the case study findings can be used in a more generalisable manner the analysis addresses the links between the formulation of the database directive and the wider context of European copyright and information policy-making in the digital age. Following the documentary and interview analysis the research findings are discussed and interpreted. The thesis concludes that at a substantive level the formulation of European copyright policy is problematic and tends towards a strengthening of protection in favour of right holders. In the digital environment the implications of this for other areas of information policy are also shown to be of concern. At the methodological level the re-interpreted process model is highlighted as useful in sensitising analysis to sources of complexity in the formulation process and for providing a coherent framework within which to study them. At the theoretical level the thesis enhances understanding of (European) information policy processes and provides some useful insights for academic information policy studies.
63

The introduction of knowledge management technology within the British Council : an action research study

Venters, Will J. January 2003 (has links)
The study describes action research undertaken within the Knowledge Management programme of the British Council, a not-for-profit multinational organisation. An interpretive methodology is adopted because of its appropriateness to the study of real-life complex situations. There is a contested literature on Knowledge Management which this study explores and contributes to. The action research draws on a social constructivist stance to develop and introduce Knowledge Management systems for significant groups within the organisation. A rich set of issues emerges from the literature, and the action research, which contribute to the discourse on Knowledge Management systems and their use in practice. The study suggests that a methodological framework is beneficial in supporting the development and introduction of such systems. However, the research identified that Knowledge Management problems cannot be identified, and so reconceptualises Knowledge Management in terms of improvement. A framework is developed (AFFEKT: Appreciative Framework for Evolving Knowledge Technologies) to support such improvement. This framework is used in the final action research cycle. The conclusions are drawn from a reflection on the application of this framework and reflection on broader issues raised by the action research. The study concludes that Knowledge Management systems should be introduced through an ongoing iterative process of reflection and action. Knowledge Management systems should encourage new work practices; however, this requires a realisation that the development of a Knowledge Management system is a reflective process by which the system is integrated into existing practice and enables users to critique this practice. The study contributes to the discourse concerning the application of technology within Knowledge Management (Galliers 1999; Alavi and Leidner 2001; Butler 2002; Wickramasinghe 2002).
It contributes to the field of Information Systems by describing a coherent narrative on the introduction of knowledge management systems within a unique organisational context, and by developing a framework to aid intervention.
64

Efficient representation and matching of texts and images in scanned book collections

Yalniz, Ismet Zeki 01 January 2014 (has links)
Millions of books from public libraries and private collections have been scanned by various organizations in the last decade. The motivation is to preserve the written human heritage in electronic format for durable storage and efficient access. The information buried in these large book collections has always been of major interest for scholars from various disciplines. Several interesting research problems can be defined over large collections of scanned books given their corresponding optical character recognition (OCR) outputs. At the highest level, one can view the entire collection as a whole and discover interesting contextual relationships or linkages between the books. A more traditional approach is to consider each scanned book separately and perform information search and mining at the book level. Here we also show that one can view each book as a whole composed of chapters, sections, paragraphs, sentences, words or even characters positioned in a particular sequential order sharing the same global context. The information inherent in the entire context of the book is referred to as "global information" and it is demonstrated by addressing a number of research questions defined for scanned book collections. The global sequence information is one of the different types of global information available in textual documents. It is useful for discovering content overlap and similarity across books. Each book has a specific flow of ideas and events which distinguishes it from other books. If this global order is changed, then the flow of events and consequently the story changes completely. This argument is true across document translations as well. Although the local order of words in a sentence might not be preserved after translation, sentences, paragraphs, sections and chapters are likely to follow the same global order. Otherwise the two texts are not considered to be translations of each other. 
A global sequence alignment approach is therefore proposed to discover the contextual similarity between the books. The problem is that conventional sequence alignment algorithms are slow and not robust for book-length documents, especially with OCR errors and additional or missing content. Here we propose a general framework which can be used to efficiently align and compare the textual content of the books at various coarseness levels and even across languages. In a nutshell, the framework uses the sequence of words which appear only once in the entire book (referred to as "the sequence of unique words") to represent the text. This representation is compact and it is highly descriptive of the content along with the global word sequence information. It is shown to be more accurate compared to the state of the art for efficiently i) detecting which books are partial duplicates in large scanned book collections (DUPNIQ), and ii) finding which books are translations of each other without explicitly translating the entire texts using statistical machine translation approaches (TRANSNIQ). Using the global order of unique words and their corresponding positions in the text, one can also generate the complete text alignment efficiently using a recursive approach. The Recursive Text Alignment Scheme (RETAS) is several orders of magnitude faster than the conventional sequence alignment approaches for long texts and it is later used for iii) the automatic evaluation of OCR accuracy of books given the OCR outputs and the corresponding electronic versions, iv) mapping the corresponding portions of two books which are known to be partial duplicates, and finally it is generalized for v) aligning long noisy texts across languages (Recursive Translation Alignment - RTA). Another example of the global information is that books are mostly printed in a single global font type.
Here we demonstrate that the global font feature along with the letter sequence information can be used for facilitating and/or improving text search in noisy page images. There are two contributions in this area: (vi) an efficient word spotting framework for searching text in noisy document images, and (vii) a state-of-the-art dependence model approach to resolve arbitrary text queries using visual features. The effectiveness of these approaches is demonstrated for books printed in different scripts for which there is no OCR engine available or the recognition accuracy is low.
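The unique-word-sequence idea in this abstract can be illustrated with a short sketch. This is not the authors' DUPNIQ or RETAS implementation, just a minimal demonstration under simplifying assumptions (whitespace tokenisation, a plain longest-common-subsequence score rather than the recursive scheme): represent each text by its once-occurring words in order of appearance, then score overlap by aligning those much shorter sequences.

```python
# Sketch: represent a text by its "sequence of unique words" (words that
# occur exactly once, kept in order), then estimate content overlap by a
# longest-common-subsequence alignment of the two unique-word sequences.
from collections import Counter

def unique_word_sequence(text):
    """Words appearing exactly once in the text, in order of appearance."""
    words = text.lower().split()
    counts = Counter(words)
    return [w for w in words if counts[w] == 1]

def lcs_length(a, b):
    """Classic O(len(a) * len(b)) longest-common-subsequence length,
    using a rolling row to keep memory linear."""
    prev = [0] * (len(b) + 1)
    for x in a:
        cur = [0]
        for j, y in enumerate(b, 1):
            cur.append(prev[j - 1] + 1 if x == y else max(prev[j], cur[-1]))
        prev = cur
    return prev[-1]

def overlap_score(text_a, text_b):
    """Fraction of the shorter unique-word sequence aligned with the other."""
    ua, ub = unique_word_sequence(text_a), unique_word_sequence(text_b)
    if not ua or not ub:
        return 0.0
    return lcs_length(ua, ub) / min(len(ua), len(ub))
```

Because unique words are far fewer than total words, the quadratic alignment runs over drastically shorter sequences, which is the efficiency argument behind the representation; the thesis's recursive scheme then refines the alignment between anchor matches.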
65

Cerebral palsy, online social networks and change

Lewis, Makayla January 2013 (has links)
In 2011, 19.2 million households in the United Kingdom had access to the Internet. Online social networks (OSN) such as Facebook, Twitter, MySpace, Bebo and YouTube have proved to be the most popular Internet activity (Office of National Statistics, 2011). 49% of these users have updated or created an OSN profile and are making over 24 million visits a month (Dutton, 2009). These websites are often directed at a broad market, i.e. people without disabilities. Unfortunately people with disabilities, especially those with physical impairments, often have a greater risk of experiencing loneliness than people without a disability as a result of their mobility, access and/or communication impairments. Conventional communication methods such as face-to-face communication, telephone communication and text message communication are often difficult to use and can limit the opportunities for people with disabilities to engage in successful socialisation with family members and friends (Braithwaite et al., 1999). Therefore people with disabilities can often see online communication, especially OSNs, as an attractive alternative. Previous studies such as Braithwaite et al. (1999), Ellis and Kent (2010) and Dobransky and Hargittai (2006) suggest that OSNs are opening a new world to individuals with disabilities. They help these individuals, especially those exhibiting lifelong physical challenges, to carry out social interaction which they would otherwise not be able to do within the analogue world. However, due to inaccessible features presented in the technology, for example features requiring JavaScript, hard-coded text size and Captcha (AbilityNet, 2008; Cahill and Hollier, 2009; and Asuncion, 2010), access to OSNs is often difficult. The overarching purpose of this PhD research is to understand the experiences and challenges faced when people with the physical disability cerebral palsy (cp) use OSNs.
It is estimated that 1 in 400 children born in the UK is affected by cp (Scope Response, 2007). The disability can present itself in a variety of ways and to varying degrees. There is no cure for cp; however, management to increase social interaction, especially through technological innovations, is often encouraged (United Cerebral Palsy, 2001; Sharan, 2005; and Colledge, 2006). Previous studies such as AbilityNet (2008), Cahill and Hollier (2009), and Boudreau (2011) have explored mainstream OSN use from the perspective of users with disabilities, i.e. blind and visually or cognitively impaired, but have placed great emphasis on investigating inaccessibility of OSNs without involving these users. Other studies such as Manna (2005) and Belchiorb et al. (2005) have used statistical methods such as surveys and questionnaires to identify Internet use among people with unspecified disabilities. Conversely, Asuncion (2010) has taken a broader approach involving OSN users, using high-level taxonomies to classify their disabilities, and Marshall et al. (2006) focused on a specific disability type, cognitive impairments, without considering the variety of limitations present within the disability. Other studies such as Pell (1999) have taken a broader yet more specific approach and looked at technology use, especially computer and assistive technology, among people with physical disabilities, where only 7 out of 82 surveyed had cp. Braithwaite et al. (1999) focused on individuals with disabilities, where most were classified as having a physical disability. However, the study does not explicitly look at OSNs but rather at online social support within forums for people with disabilities. Studies such as these have not involved the users, defined what constitutes disability, or focused on cp without encompassing other disabilities, making it impossible to identify the requirements of OSN users with cp.
Initially this PhD research explored the experiences and challenges faced when individuals with cp use OSNs. Fourteen interviews were carried out consisting of participants with variations of the disability. The study identified the reasons for OSN use and non-use and also discovered key themes together with challenges that affected their experiences. This work was followed by an in-context observational study that examined these individuals' context of use. The study identified the OSNs and assistive technology used, tasks carried out and users' feelings during interaction. As a result of these studies it was determined that changing OSNs prevented and/or slowed down these users' ability to communicate online. In previous work within human-computer interaction (HCI) and other disciplines such as software engineering and management science, change is often discussed during software development and is restricted to identifying scenarios and tools that assist change management within information technology (Jarke and Kurkisuonio, 1998). Studies such as these have not considered change deployment or its effect on users, and within HCI such an understanding is limited. Other disciplines, i.e. psychology and the social sciences, have looked at change deployment. Theorists such as Lewin (1952), Lippett (1958) and Griffith (2001) attempt to offer solutions. However, no one theory or approach is widely accepted, and contradictions, adaptations and exclusions are continually being made. Conversely, Woodward and Hendry (2004) and By (2007) have attempted to contend with these difficulties, specifically stress as a result of change, believing that if change agents are aware of what an affected individual is thinking during the onset of change it will help to minimise or prevent damage. Studies such as these have focused on software development or organisational change from the perspective of developers or employees; they have not considered OSNs or individuals with cp.
To fill this gap a longitudinal OSN monitoring and analysis study was carried out. The study identified how OSN changes are introduced, their effect on users, and the factors that encourage change acceptance or non-acceptance. The study was divided into three parts: two studies investigating real-world examples of OSN change by observing the actions of change agents (Twitter.com and Facebook.com) and their users' reactions to the change process, and a third study that asked OSN users about their experiences of OSN change. A by-product of these studies was a unique way of displaying OSN change and user acceptance on a large scale using an infographic, and an inductive category model that can be used to examine OSN change. The findings from the five studies were then distilled alongside identified change management approaches and theories to develop a five-stage process for OSN change for change agents to follow. The process defined the requirements for OSN change, including the change agent responsibilities before, during and after the change.
66

Developing an integrated MDT service model for the management of patients with lung cancer

Sridhar, Balasubramanian January 2013 (has links)
The motivation for this research was the publication in 1995 of the Calman-Hine report. This provided a strategic framework for the delivery of cancer care by creating a network of cancer care centres in England and Wales to enable patients to receive a uniformly high standard of care. The report acknowledged the fact that although the evidence on optimal cancer care used to prepare the report was based on two key sources (i) medical literature and (ii) audit data provided by UK cancer registries, they did not lend themselves to controlled experiments as most information came from retrospective analyses; hence they were subject to a number of possible flaws and biases. Yet the report recommended some key structural changes to be implemented. The focus of the research described in this thesis was centred on the recommendation of a multidisciplinary team (MDT) review of patients prior to a treatment decision, both in general cancer units as well as in specialised cancer centres. Given the mandate to implement these recommendations, the research questions addressed were “can the current configuration support this recommendation?”, “what evidence was there to support the effectiveness of the MDT?” and “was there a model of care to support the service delivery of cancer care?” A literature review established that there was no existing template upon which MDT services could be set up. This research therefore set out to develop an MDT model to support operational delivery of care in the setting of a cancer centre. The clinical specialty in which this research was undertaken was that of lung cancer. The research successfully developed a conceptual model. 
However, in the process, a number of operational and practical constraints were identified within the revised service configuration designed to deliver high quality cancer care through the incorporation of the MDT service, and this ultimately limited the extent to which the model could be deployed in the particular clinical setting. Nevertheless, the modelling process did enable a range of core issues to be identified, enabling design solutions to be formulated and tested, thereby confirming the effectiveness of the MDT model. In particular, the adoption of a soft modelling approach was shown to be beneficial in addressing operational problems. By engaging clinical and other end-users right from the start in the modelling process, the models did become operationally accepted, allowing resistance to change to be overcome and the solution to be integrated into the business process. MDT services are now well established, both in cancer units and cancer centres, and published data on their effectiveness in the treatment of lung cancer, although not conclusive, demonstrate an increase in resection rates. However, assessing the long-term impact of MDTs on lung cancer outcomes remains a topic for future research.
67

Evaluating human-centered approaches for geovisualization

Lloyd, David January 2009 (has links)
Working with two small groups of domain experts, I evaluate human-centered approaches to application development which are applicable to geovisualization, following an ISO 13407 taxonomy that covers context of use, eliciting requirements, and design. These approaches include field studies and contextual analysis of subjects' context; establishing requirements using a template, via a lecture to communicate geovisualization to subjects and by communicating subjects' context to geovisualization experts with a scenario; autoethnography to understand the geovisualization design process; wireframe, paper and digital interactive prototyping with alternative protocols; and a decision-making process for prioritising application improvement. I find that the acquisition and use of real user data is key, and that a template approach and teaching subjects about visualization tools and interactions both fail to elicit useful requirements for a visualization application. Consulting geovisualization experts with a scenario of user context and samples of user data does yield suggestions for tools and interactions of use to a visualization designer. The complex and composite natures of both the visualization and human-centered domains, incorporating learning from both domains along with user context, make design challenging. Wireframe, paper and digital interactive prototypes mediate between the user and visualization domains successfully, eliciting exploratory behaviour and suggestions to improve prototypes. Paper prototypes are particularly successful at eliciting suggestions and especially novel visualization improvements. Decision-making techniques prove useful for prioritising different possible improvements, although domain subjects select data-related features over more novel alternatives and rank these more inconsistently.
The research concludes that understanding subject context of use and data is important and occurs throughout the process of engagement with domain experts, and that standard requirements elicitation techniques are unsuccessful for geovisualization. Engagement with subjects at an early stage with simple prototypes incorporating real subject data and moving to successively more complex prototypes holds the best promise for creating successful geovisualization applications.
68

A knowledge management framework for the telecommunication industry : the KMFTI model

Elashaheb, M. S. January 2005 (has links)
Recent years have witnessed a continuing growth of developments in knowledge management systems to capture the information flows within organisations and turn them into exploitable management databases. Examples include the Total Quality Management and Business Process Reengineering models. There is no doubt that during the last few years there has been a broad interest in exploiting knowledge. However, traditional Knowledge Management (KM) systems and frameworks do not necessarily take into account the specific nature of the telecommunication industry, particularly those related to capturing, sharing and exploiting unconventional data flows that occur between personnel on the move such as technicians and engineers. Thus, a large amount of these data is lost and will never be able to benefit the organisation or its employees in any way. Therefore, this research addresses the development of a new KM framework to fill in this gap and provide telecommunication organisations in general, and the General Post and Telecommunication Company (GPTC) in Libya in particular, with a solid base where bulk and rough data will become exploitable and manageable in a concise and intelligent way. The main questions being posed by this research are as follows: could existing Knowledge Management systems help the GPTC in Libya in particular and the telecommunication industry in general to better manage their data flows and turn them into an exploitable knowledge base? And how could a strategic Knowledge Management Framework (KMF) contribute to establishing adequate guidelines and policies in such a telecommunication environment? In this regard, the investigations in this research will stress the identification of the broad range of issues that are preventing the adoption of KM systems within the GPTC or any given telecommunication organisation, rather than trying to focus on a specific and unique question about the exploitation of KM.
This approach is justified by the fact that no specific KMS appears to have been developed for this industry. Furthermore, the various parameters are described under this common framework, which is expected to benefit the telecommunication sector as a whole.
69

International branch campus faculty member experiences of the academic library

Salaz, Alicia January 2015 (has links)
This thesis uses phenomenography to investigate the perceptions and experiences of academic libraries by faculty members across a variety of disciplines working in international branch campuses (IBCs). The main research question addressed by the study asks how faculty members experience the academic library, with the objective of identifying qualitative variations in experience within this group. The findings of this research address established practical problems related to library value and identity, and have implications for practice in both the development and evaluation of library services for faculty members, as well as communication about those services with faculty members. Furthermore, the findings of this research support practical developments in the support of faculty members engaged in transnational higher education provision. The research finds that participants in this context experienced the academic library in at least six different ways and reported a variety of experiences in terms of using information, in and out of the academic library, to accomplish core faculty member functions of teaching and research. The categories of experience generated through the study are: IBC faculty members experience the academic library as relationships with librarians; as a content provider; as a discovery service; as a facilitator for engaging with the academic community; as a champion of reading books; and as a compliance centre for information ethics. Investigations into the information behaviour, library use and perceptions of faculty members have been conducted in a variety of contexts, but are limited in transnational contexts. This research therefore also represents an original and important contribution to an understanding of academic library practice in transnational or cross-border contexts, as well as contributing to a limited knowledge base about the experiences of faculty members in transnational higher education generally.
Phenomenographic investigations into the experiences of library and information science elements such as libraries and information centres are rare, and therefore this research represents an original contribution to understanding this phenomenon in this way. The study employed phenomenography as the methodology for understanding the academic library experiences of the participants. Ten faculty member participants representing a variety of IBC institutions located within major educational hubs in the Arab Gulf and Southeast Asia were interviewed about their academic library experiences moving from a home campus to a branch campus, using the story of this move as a critical incident for starting discussion and relaying real experiences to the researcher. These experiences are theoretically situated in the context of information worlds (Jaeger & Burnett, 2010) in order to increase understanding around the formation of these experiences and to critically analyse practical implications. This research design contributes to the phenomenographic method by detailing its procedures and to its theoretical aspects by linking the methodological with a framework, Jaeger and Burnett’s theory of information worlds, which facilitates phenomenography outside its traditional domain of teaching and learning research.
70

Serious leisure in the digital world : exploring the information behaviour of fan communities

Price, L. January 2017 (has links)
This research investigates the information behaviour of cult media fan communities on the internet, using three novel methods which have not previously been applied to this domain. Firstly, a review, analysis and synthesis of the literature related to fan information behaviour, both within the disciplines of LIS and fan studies, revealed unique aspects of fan information behaviour, particularly with regard to produsage, copyright, and creativity. The findings from this literature analysis were subsequently investigated further using the Delphi method and tag analysis. A new Delphi variant – the Serious Leisure Delphi – was developed through this research. The Delphi study found that participants expressed the greatest levels of consensus on statements on fan behaviour that were related to information behaviour and information-related issues. Tag analysis was used in a novel way, as a tool to examine information behaviour. This found that fans have developed a highly granular classification system for fanworks, and that on one particular repository a ‘curated folksonomy’ was being used with great success. Fans also use tags for a variety of reasons, including communicating with one another, and writing meta-commentary on their posts. The research found that fans have unique information behaviours related to classification, copyright, entrepreneurship, produsage, mentorship and publishing. In the words of Delphi participants – “being in fandom means being in a knowledge space,” and “fandom is a huge information hub just by existing”. From these findings a model of fan information behaviour has been developed, which could be further tested in future research.
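The kind of tag analysis this abstract describes can be sketched in miniature. This is not the study's actual method or data: the tag strings and synonym mappings below are invented, and the sketch only shows the core idea of a "curated folksonomy", where free-form synonym tags are mapped onto canonical ones before frequencies are counted.

```python
# Toy sketch of folksonomy tag analysis: canonicalise synonymous tags via
# a curated mapping, then count tag frequencies across fanwork records.
# The synonym table and example tags are invented placeholders.
from collections import Counter

# Curated mapping from free-form tag variants to a canonical tag.
SYNONYMS = {"alternate universe": "AU", "au": "AU"}

def canonicalise(tag):
    """Map a free-form tag to its curated canonical form, if one exists."""
    return SYNONYMS.get(tag.lower().strip(), tag.strip())

def tag_frequencies(records):
    """Count canonical tag occurrences across a list of tag lists."""
    counts = Counter()
    for tags in records:
        counts.update(canonicalise(t) for t in tags)
    return counts
```

Aggregating variants before counting is what lets a curated folksonomy surface the true popularity of a concept despite fans tagging it in many different ways.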
