101

Compile-time optimisation of store usage in lazy functional programs

Hamilton, Geoffrey William, January 1993
Functional languages offer a number of advantages over their imperative counterparts. However, a substantial amount of the time spent on processing functional programs is due to the large amount of storage management which must be performed. Two apparent reasons for this are that the programmer is prevented from including explicit storage management operations in programs which have a purely functional semantics, and that more readable programs are often far from optimal in their use of storage. Correspondingly, two alternative approaches to the optimisation of store usage at compile-time are presented in this thesis. The first approach is called compile-time garbage collection. This approach involves determining at compile-time which cells are no longer required for the evaluation of a program, and making these cells available for further use. This overcomes the problem of a programmer not being able to indicate explicitly that a store cell can be made available for further use. Three different methods for performing compile-time garbage collection are presented in this thesis: compile-time garbage marking, explicit deallocation and destructive allocation. Of these three methods, it is found that destructive allocation is the only method which is of practical use. The second approach to the optimisation of store usage is called compile-time garbage avoidance. This approach involves transforming programs at compile-time into semantically equivalent programs which produce less garbage. This attempts to overcome the problem of more readable programs being far from optimal in their use of storage. In this thesis, it is shown how to guarantee that the process of compile-time garbage avoidance will terminate. Both of the described approaches to the optimisation of store usage make use of the information obtained by usage counting analysis. This involves counting the number of times each value in a program is used. In this thesis, a reference semantics is defined against which the correctness of usage counting analyses can be proved. A usage counting analysis is then defined and proved to be correct with respect to this reference semantics. The information obtained by this analysis is used to annotate programs for compile-time garbage collection, and to guide the transformation when compile-time garbage avoidance is performed. It is found that compile-time garbage avoidance produces greater increases in efficiency than compile-time garbage collection, but much of the garbage which can be collected by compile-time garbage collection cannot be avoided at compile-time. The two approaches are therefore complementary, and the expressions resulting from compile-time garbage avoidance transformations can be annotated for compile-time garbage collection to further optimise the use of storage.
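The garbage-avoidance transformation described above is in the spirit of fusion or deforestation: a composition of functions is rewritten into a semantically equivalent definition that never allocates the intermediate structure. The following is a minimal, hypothetical sketch of the before-and-after shapes in Python (the thesis targets lazy functional languages, so this only illustrates the idea, not the thesis's transformation system):

```python
# Hypothetical illustration of garbage avoidance, not the thesis's actual
# transformation: the "naive" form allocates an intermediate list that
# immediately becomes garbage; the "fused" form is semantically equivalent
# but never builds it.

def squares(xs):
    # Allocates a fresh list whose only purpose is to be consumed below.
    return [x * x for x in xs]

def total(xs):
    acc = 0
    for x in xs:
        acc += x
    return acc

def sum_of_squares_naive(xs):
    # The list returned by squares() is garbage once total() finishes.
    return total(squares(xs))

def sum_of_squares_fused(xs):
    # Result of a garbage-avoidance style rewrite: one pass,
    # no intermediate allocation.
    acc = 0
    for x in xs:
        acc += x * x
    return acc

assert sum_of_squares_naive(range(10)) == sum_of_squares_fused(range(10))
```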
102

Cerebral palsy, online social networks and change

Lewis, Makayla, January 2013
In 2011, 19.2 million households in the United Kingdom had access to the Internet. Online social networks (OSNs) such as Facebook, Twitter, MySpace, Bebo and YouTube have proved to be the most popular Internet activity (Office of National Statistics, 2011). 49% of these users have updated or created an OSN profile and are making over 24 million visits a month (Dutton, 2009). These websites are often directed at a broad market, i.e. people without disabilities. Unfortunately people with disabilities, especially those with physical impairments, often have a greater risk of experiencing loneliness than people without a disability as a result of their mobility, access and/or communication impairments. Conventional communication methods such as face-to-face communication, telephone communication and text message communication are often difficult to use and can limit the opportunities for people with disabilities to engage in successful socialisation with family members and friends (Braithwaite et al., 1999). Therefore people with disabilities can often see online communication, especially OSNs, as an attractive alternative. Previous studies such as Braithwaite et al. (1999), Ellis and Kent (2010) and Dobransky and Hargittai (2006) suggest that OSNs are opening a new world to individuals with disabilities. They help these individuals, especially those with lifelong physical challenges, to carry out social interaction which they would otherwise not be able to do in the analogue world. However, due to inaccessible features present in the technology, for example features requiring JavaScript, hard-coded text size and Captcha (AbilityNet, 2008; Cahill and Hollier, 2009; Asuncion, 2010), access to OSNs is often difficult. The overarching purpose of this PhD research is to understand the experiences and challenges faced when people with the physical disability cerebral palsy (cp) use OSNs. It is estimated that 1 in 400 children born in the UK is affected by cp (Scope Response, 2007). The disability can present itself in a variety of ways and to varying degrees. There is no cure for cp; however, management to increase social interaction, especially through technological innovations, is often encouraged (United Cerebral Palsy, 2001; Sharan, 2005; Colledge, 2006). Previous studies such as AbilityNet (2008), Cahill and Hollier (2009), and Boudreau (2011) have explored mainstream OSN use from the perspective of users with disabilities, i.e. blind, visually impaired or cognitively impaired users, but have placed great emphasis on investigating the inaccessibility of OSNs without involving these users. Other studies such as Manna (2005) and Belchior et al. (2005) have used statistical methods such as surveys and questionnaires to identify Internet use among people with unspecified disabilities. Conversely, Asuncion (2010) has taken a broader approach, involving OSN users and using high-level taxonomies to classify their disabilities, and Marshall et al. (2006) focused on a specific disability type, cognitive impairments, without considering the variety of limitations present within the disability. Other studies such as Pell (1999) have taken a broader yet more specific approach and looked at technology use, especially computer and assistive technology, among people with physical disabilities, where only 7 out of 82 surveyed had cp. Braithwaite et al. (1999) focused on individuals with disabilities, where most were classified as having a physical disability.
However, the study does not explicitly look at OSNs but rather at online social support within forums for people with disabilities. Studies such as these have not involved the users, defined what constitutes disability, or focused on cp without encompassing other disabilities, making it impossible to identify the requirements of OSN users with cp. Initially this PhD research explored the experiences and challenges faced when individuals with cp use OSNs. Fourteen interviews were carried out with participants with variations of the disability. The study identified the reasons for OSN use and non-use, and also discovered key themes together with challenges that affected their experiences. This work was followed by an in-context observational study that examined these individuals' context of use. The study identified the OSNs and assistive technology used, the tasks carried out and users' feelings during interaction. As a result of these studies it was determined that changing OSNs prevented and/or slowed down these users' ability to communicate online. In previous work within human-computer interaction and other disciplines such as software engineering and management science, change is often discussed during software development and is restricted to identifying scenarios and tools that assist change management within information technology (Jarke and Kurkisuonio, 1998). Studies such as these have not considered change deployment or its effect on users, and within HCI such an understanding is limited. Other disciplines, i.e. psychology and the social sciences, have looked at change deployment. Theorists such as Lewin (1952), Lippett (1958) and Griffith (2001) attempt to offer solutions. However, no one theory or approach is widely accepted, and contradictions, adaptations and exclusions are continually being made. Conversely, Woodward and Hendry (2004) and By (2007) have attempted to contend with these difficulties, specifically stress as a result of change, believing that if change agents are aware of what an affected individual is thinking during the onset of change it will help to minimise or prevent damage. Studies such as these have focused on software development or organisational change from the perspective of developers or employees; they have not considered OSNs or individuals with cp. To fill this gap a longitudinal OSN monitoring and analysis study was carried out. The study identified how OSN changes are introduced, their effect on users, and the factors that encourage change acceptance or non-acceptance. The work was divided into three studies: two investigating real-world examples of OSN change by observing the actions of change agents (Twitter.com and Facebook.com) and their users' reactions to the change process, and a third asking OSN users about their experiences of OSN change. A by-product of these studies was a unique way of displaying OSN change and user acceptance on a large scale using an infographic, together with an inductive category model that can be used to examine OSN change. The findings from the five studies were then distilled alongside identified change management approaches and theories to develop a five-stage process for OSN change for change agents to follow. The process defined the requirements for OSN change, including the change agent's responsibilities before, during and after the change.
103

Developing an integrated MDT service model for the management of patients with lung cancer

Sridhar, Balasubramanian, January 2013
The motivation for this research was the publication in 1995 of the Calman-Hine report. This provided a strategic framework for the delivery of cancer care by creating a network of cancer care centres in England and Wales to enable patients to receive a uniformly high standard of care. The report acknowledged that although the evidence on optimal cancer care used to prepare it was based on two key sources, (i) the medical literature and (ii) audit data provided by UK cancer registries, these did not lend themselves to controlled experiments as most information came from retrospective analyses; hence they were subject to a number of possible flaws and biases. Yet the report recommended some key structural changes to be implemented. The focus of the research described in this thesis was centred on the recommendation of a multidisciplinary team (MDT) review of patients prior to a treatment decision, both in general cancer units and in specialised cancer centres. Given the mandate to implement these recommendations, the research questions addressed were “can the current configuration support this recommendation?”, “what evidence was there to support the effectiveness of the MDT?” and “was there a model of care to support the service delivery of cancer care?” A literature review established that there was no existing template upon which MDT services could be set up. This research therefore set out to develop an MDT model to support the operational delivery of care in the setting of a cancer centre. The clinical specialty in which this research was undertaken was lung cancer. The research successfully developed a conceptual model. However, in the process, a number of operational and practical constraints were identified within the revised service configuration designed to deliver high-quality cancer care through the incorporation of the MDT service, and this ultimately limited the extent to which the model could be deployed in the particular clinical setting. Nevertheless, the modelling process did enable a range of core issues to be identified, enabling design solutions to be formulated and tested, thereby confirming the effectiveness of the MDT model. In particular, the adoption of a soft modelling approach was shown to be beneficial in addressing operational problems. By engaging clinical and other end-users right from the start of the modelling process, the models became operationally accepted, allowing resistance to change to be overcome and the solution to be integrated into the business process. MDT services are now well established, both in cancer units and cancer centres, and published data on their effectiveness in the treatment of lung cancer, although not conclusive, demonstrate an increase in resection rates. However, assessing the long-term impact of MDTs on lung cancer outcomes remains a topic for future research.
104

Evaluating human-centered approaches for geovisualization

Lloyd, David, January 2009
Working with two small groups of domain experts, I evaluate human-centered approaches to application development which are applicable to geovisualization, following an ISO 13407 taxonomy that covers context of use, eliciting requirements, and design. These approaches include field studies and contextual analysis of subjects' context; establishing requirements using a template, via a lecture to communicate geovisualization to subjects, and by communicating subjects' context to geovisualization experts with a scenario; autoethnography to understand the geovisualization design process; wireframe, paper and digital interactive prototyping with alternative protocols; and a decision-making process for prioritising application improvement. I find that the acquisition and use of real user data is key, and that a template approach and teaching subjects about visualization tools and interactions both fail to elicit useful requirements for a visualization application. Consulting geovisualization experts with a scenario of user context and samples of user data does yield suggestions for tools and interactions of use to a visualization designer. The complex and composite natures of both the visualization and human-centered domains, incorporating learning from both domains along with user context, make design challenging. Wireframe, paper and digital interactive prototypes mediate between the user and visualization domains successfully, eliciting exploratory behaviour and suggestions to improve the prototypes. Paper prototypes are particularly successful at eliciting suggestions, especially novel visualization improvements. Decision-making techniques prove useful for prioritising different possible improvements, although domain subjects select data-related features over more novel alternatives and rank these more inconsistently. The research concludes that understanding subjects' context of use and data is important and occurs throughout the process of engagement with domain experts, and that standard requirements elicitation techniques are unsuccessful for geovisualization. Engagement with subjects at an early stage with simple prototypes incorporating real subject data, moving to successively more complex prototypes, holds the best promise for creating successful geovisualization applications.
105

A human factors perspective on volunteered geographic information

Parker, Christopher J., January 2012
This thesis takes a multidisciplinary approach to understanding the unique abilities of Volunteered Geographic Information (VGI) to enhance the utility of online mashups in ways not achievable with Professional Geographic Information (PGI). The key issues currently limiting the successful use of VGI are concern for the quality, accuracy and value of the information, as well as the polarisation and bias of views within the user community. This thesis reviews different theoretical approaches in Human Factors, Geography, Information Science and Computer Science to help understand the notion of user judgements relative to VGI within an online environment (Chapter 2). Research methods relevant to a human factors investigation are also discussed (Chapter 3). The scoping study established fundamental insights into the terminology and nature of VGI and PGI; a range of users were engaged through a series of qualitative interviews. This led to the development of a framework on VGI (Chapter 4), and a comparative description of users in relation to one another through a value framework (Chapter 5). Study Two was a qualitative multi-method investigation into how users perceive VGI and PGI in use (Chapter 6), demonstrating similarities and the unique ability of VGI to provide utility to consumers. Study Three brought insight into the specific abilities of VGI to enhance user judgement of online information within an information relevance context (Chapters 7 and 8). In understanding the outcomes of these studies, this thesis discusses how users perceive VGI as different from PGI in terms of its benefit to consumers from a user-centred design perspective (Chapter 9), in particular the degree to which user concerns are valid, the limitations of VGI in application, and its potential strengths in enriching the user experiences of consumers engaged in an information search. In conclusion, specific contributions and avenues for further work are highlighted (Chapter 10).
106

Towards open access : managerial, technical, economic and cultural aspects of improving access to research outputs from the perspective of a library and information services provider in a research university

Pinfield, Stephen, January 2011
For academic research to release its value, it has to be communicated. It is essential, if research is to flourish, that the various forms of research communication, including journal articles and similar research outputs, are as easily and widely available as possible. The publications in this submission, produced between 1998 and 2010, all discuss major aspects (managerial, technical, economic and cultural) of improving access to research outputs in order to support research activity in higher education institutions. The later works focus in particular on the issue of ‘open access’ (OA) publishing and dissemination. The publications investigate the why and how of OA. Firstly, they examine the potential benefits (and dis-benefits) of OA for the research community and other stakeholders. Secondly, they discuss how OA systems and services might operate in practice. The earlier works on OA focus on repositories, particularly institutional repositories. Some of the later publications bring into consideration OA journals and their (potential) ongoing relationship with repositories. The publications are written from the perspective of a library and information services provider in a research university. They report on ground-breaking action-based research-and-development work: setting up innovative demonstrator systems, developing new business processes, and designing novel organisational policies. Possible future scenarios are modelled and analysed. It is shown that these activities have made a significant impact on wider professional practice, as well as contributing to the research literature, as OA has become more mainstream. Major themes discussed include managerial challenges associated with implementing OA services; technical issues relating to the development of systems and standards; economic factors covering costs, funding streams and business models; and cultural issues, including disciplinary differences. These are examined in relation to different stakeholder groups at institutional, national and system-wide levels. Other key themes include intellectual property rights and quality assurance. A clearer picture of possible research-communication futures incorporating OA is developed.
107

The role of online support communities for people experiencing infertility

Malik, Sumaira, January 2010
People faced with infertility will often experience a strong need for psychosocial support and guidance; a need which is not always adequately met by existing sources of support. The growth in access to the Internet over recent years has opened up new opportunities for people affected by infertility to seek support, advice and information through the means of an online support community. These online communities can potentially play an important role in addressing the support and information needs of people experiencing infertility by improving their ability to access peer and professional support. Additionally, online communities may offer a more welcoming and comfortable environment in which these individuals can share their infertility experiences and concerns. This thesis adopted a triangulated approach to research the potential role of online communities in helping people cope with the challenges of infertility. An initial qualitative study was conducted with 95 people accessing online infertility support communities to explore their motives, perceptions and experiences of online support seeking. Responses revealed that participants especially valued the unique characteristics of computer-mediated communication (e.g. anonymity, asynchrony, etc.), which appeared to facilitate their ability to access and seek support. In addition, there were a number of psychosocial benefits associated with the online support communities, which appeared to aid the participants' ability to cope with their infertility experiences. Key benefits included reduced feelings of isolation and loneliness, improvements in marital relationships, and access to a unique and valuable source of emotional and informational support. This study was followed by a content analysis of the therapeutic and self-help mechanisms used in 3,500 messages posted to a popular UK online infertility support community. Results from this stage suggested that the key functions of the online support community were to exchange support and empathy and to provide a forum for individuals to share their personal experiences related to infertility. Results also revealed that, on the whole, communication within the online support community was extremely positive and constructive, offering group members the opportunity to utilise many of the therapeutic and self-help mechanisms that are known to be beneficial to people using face-to-face support networks. The issues and questions raised in these initial studies were further examined in a larger-scale survey of 295 users of online infertility support communities. This study quantitatively examined the use and experience of online infertility support communities and how this relates to psychosocial well-being. Results revealed that the majority of participants considered there to be a range of important benefits from accessing online communities. However, the study also identified a number of potential disadvantages to accessing online infertility support communities, which appeared to have an impact on the experiences and psychosocial well-being of infertile individuals. The theoretical, methodological and practical implications of these findings are discussed.
108

A knowledge management framework for the telecommunication industry : the KMFTI model

Elashaheb, M. S., January 2005
Recent years have witnessed a continuing growth of developments in knowledge management systems to capture the information flows within organisations and turn them into exploitable management databases. Examples of this are the Total Quality Management and Business Process Reengineering models. There is no doubt that during the last few years there has been a broad interest in exploiting knowledge. However, traditional Knowledge Management (KM) systems and frameworks do not necessarily take into account the specific nature of the telecommunication industry, particularly those aspects related to capturing, sharing and exploiting the unconventional data flows that occur between personnel on the move, such as technicians and engineers. Thus, a large amount of these data is lost and will never be able to benefit the organisation or its employees in any way. Therefore, this research addresses the development of a new KM framework to fill this gap and provide telecommunication organisations in general, and the General Post and Telecommunication Company (GPTC) in Libya in particular, with a solid base where bulk and rough data become exploitable and manageable in a concise and intelligent way. The main questions posed by this research are as follows: could existing Knowledge Management systems help the GPTC in Libya in particular, and the telecommunication industry in general, to better manage their data flows and turn them into an exploitable knowledge base? And how could a strategic Knowledge Management Framework (KMF) contribute to establishing adequate guidelines and policies in such a telecommunication environment? In this regard, the investigations in this research stress the identification of the broad range of issues that are preventing the adoption of KM systems within the GPTC or any given telecommunication organisation, rather than trying to focus on a specific and unique question about the exploitation of KM. This approach is justified by the fact that no specific KMS appears to have been developed for this industry. Furthermore, the various parameters are described under this common framework, which is expected to benefit the telecommunication sector as a whole.
109

Behaviour based anomaly detection system for smartphones using machine learning algorithm

Majeed, Khurram, January 2015
In this research, we propose a novel, platform-independent, behaviour-based anomaly detection system for smartphones. The fundamental premise of this system is that every smartphone user has unique usage patterns. By modelling these patterns into a profile we can uniquely identify users. To evaluate this hypothesis, we conducted an experiment in which a data collection application was developed to accumulate a real-life dataset consisting of application usage statistics, various system metrics and contextual information from smartphones. Descriptive statistical analysis was performed on our dataset to identify patterns of dissimilarity in smartphone usage among the participants of our experiment. Following this analysis, a machine learning algorithm was applied to the dataset to create a baseline usage profile for each participant. These profiles were compared to monitor deviations from the baseline in a series of tests that we conducted to determine the profiling accuracy. In the first test, seven days of smartphone usage data consisting of eight features and an observation interval of one hour were used, and an accuracy range of 73.41% to 100% was achieved. In this test, 8 out of 10 user profiles were more than 95% accurate. The second test utilised the entire dataset and achieved an accuracy range of 44.50% to 95.48%. Not only are these results very promising for differentiating participants based on their usage, but the implications of this research are also far-reaching, as our system can be extended to provide transparent, continuous user authentication on smartphones or work as a risk-scoring engine for other intrusion detection systems.
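As a loose illustration of the baseline-profile idea (the thesis's actual features and learning algorithm are not reproduced here), a per-user profile can be summarised by the mean and spread of hourly usage features, with new observations scored by their deviation from that profile. A hedged sketch, assuming NumPy and invented feature values:

```python
# Hypothetical sketch of deviation-from-baseline profiling. Feature names,
# values and thresholds are illustrative only, not the thesis's dataset.
import numpy as np

def build_profile(observations: np.ndarray):
    """observations: rows = hourly windows, cols = usage features
    (e.g. app launches, screen-on minutes, data sent/received)."""
    # Small epsilon keeps the division below safe for constant features.
    return observations.mean(axis=0), observations.std(axis=0) + 1e-9

def deviation_score(profile, window: np.ndarray) -> float:
    mean, std = profile
    # Mean absolute z-score across features: near 0 means typical usage.
    return float(np.abs((window - mean) / std).mean())

# One week of synthetic hourly observations for four features.
rng = np.random.default_rng(0)
train = rng.normal(loc=[30, 45, 5, 12], scale=[5, 8, 1, 3], size=(7 * 24, 4))
profile = build_profile(train)

typical = np.array([29, 44, 5, 11])    # resembles the owner's habits
atypical = np.array([90, 2, 20, 40])   # resembles a different user
print(deviation_score(profile, typical))    # low score
print(deviation_score(profile, atypical))   # high score: flag for review
```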
110

Real time detection of malicious webpages using machine learning techniques

Ahmed, Shafi, January 2015
In today's Internet, online content and especially webpages have increased exponentially. Alongside this huge rise, the number of users has also grown considerably in the past two decades. Most responsible institutions such as banks and governments follow specific rules and regulations regarding conduct and security, but most websites are designed and developed with few restrictions on these issues. That is why it is important to protect users from harmful webpages. Previous research has looked at detecting harmful webpages by running machine learning models on a remote website. The problem with this approach is that the detection rate is slow because of the need to handle a large number of webpages. There is a gap in knowledge concerning which machine learning algorithms are capable of detecting harmful web applications in real time on a local machine. The conventional method of detecting malicious webpages is to go through a blacklist and check whether a webpage is listed. A blacklist is a list of webpages which are classified as malicious from a user's point of view. These blacklists are created by trusted organisations and volunteers, and are then used by modern web browsers such as Chrome, Firefox and Internet Explorer. However, blacklists are ineffective because of the frequently changing nature of webpages, the growing number of webpages (which poses scalability issues) and crawlers' inability to visit intranet webpages that require users to log in as authenticated users. This thesis proposes to use various machine learning algorithms, both supervised and unsupervised, to categorise webpages by parsing their features such as content (which played the most important role in this thesis), URL information, URL links and screenshots of webpages. The features were then converted to a format understandable by machine learning algorithms, which analysed them to make one important decision: whether a given webpage is malicious or not, using commonly available software and hardware. Prototype tools were developed to compare and analyse the efficiency of these machine learning techniques. The supervised algorithms include Support Vector Machine, Naïve Bayes, Random Forest, Linear Discriminant Analysis, Quadratic Discriminant Analysis and Decision Tree; the unsupervised techniques are Self-Organising Map, Affinity Propagation and K-Means. Self-Organising Map was used instead of Neural Networks, and the research suggests that newer neural network methods, i.e. deep learning, would be well suited to this problem. The supervised algorithms performed better than the unsupervised algorithms, and the best of all these techniques is SVM, which achieves 98% accuracy. The result was validated by a Chrome extension which used the classifier in real time. The unsupervised algorithms came close to the supervised ones, which is surprising given that they do not have access to class information beforehand.
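The supervised, content-based part of such a pipeline can be sketched with commonly available tooling. A minimal example assuming scikit-learn and toy data (the thesis's real feature set also includes URL information, links and screenshots, which are omitted here):

```python
# Hedged sketch of content-based classification with an SVM: page text is
# vectorised with TF-IDF and classified as malicious or benign. The pages
# and labels below are invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

pages = [
    "win free prize click here verify your account password",
    "departmental seminar schedule and lecture notes for students",
    "urgent bank alert confirm login credentials immediately",
    "open source library documentation api reference examples",
]
labels = [1, 0, 1, 0]  # 1 = malicious, 0 = benign

# TF-IDF converts raw text into numeric features the SVM can consume.
clf = make_pipeline(TfidfVectorizer(), LinearSVC())
clf.fit(pages, labels)

print(clf.predict(["confirm your password to claim the prize"]))  # -> [1]
```

A classifier like this, trained offline and embedded in a browser extension, is one plausible way to score pages locally in real time rather than querying a remote service for every visit.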
