About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
11

The distinctive nature of making news online : a study of news production at latimes.com and salon.com

Van Dam, Brooke January 2010 (has links)
This thesis provides an inside, in-depth look at how journalists at latimes.com and salon.com came together to create content for their websites over a six-month period. It vividly unveils the process of newsmaking by journalists working for organisations whose output is the world wide web. It uses mixed-method case studies of two US-based news websites, latimes.com and salon.com, to show how both parentage and net native sites construct a news story. The case studies include direct observation, in-depth interviews and content analysis to deconstruct the process of covering the 2008 Presidential election. The thesis is framed around Brian McNair's cultural chaos paradigm (2006), which explains the emergent nature of news online and the lack of control by any environmental factors that seek to affect its outcome. The thesis begins by outlining the four crucial changes occurring online that are redefining major tenets of journalism both practically and theoretically. It goes on to explain not only how online news has become a destination for many around the world but also why these two online news websites have found a niche for themselves on the Web. The findings of this research outline not only how the newsmaking process operates in these two environments but also how they are creating a new type of convotelling journalism. The 2008 US Presidential election is used as a story to show the unstructured and chaotic network that now exists in how news is gathered, produced, and disseminated online. The thesis goes on to explain the multitude of changing relationships journalists are grappling with as this convotelling newsmaking process occurs. The contrast between the net native and parentage websites is dissected to show just how the two sites vary even though their goals are similar. The research concludes by arguing for a hybrid model of journalism being done online that is distinctive in nature.
12

Living in the shadow of suicide : the narrative of an online internet memorial site created by a survivor of bereavement by suicide : a biographical study

Scott, Saffron L. January 2012 (has links)
Online memorials are an Internet phenomenon of the 21st century and have been identified as a growing contemporary mourning practice mediated by online computer networks. Online memorials offer a logical discursive platform for a unique form of personalised yet communal virtual memorialising, one which mirrors the needs of a fractured and geographically divided society and affords twenty-four-hour access to all those who use the internet. Online memorials have also been identified as a virtual location where stigma, disenfranchisement and loss of voice in bereavement can be publicly noted and challenged. Current research surrounding the use of online memorials has identified that little is known about the creation and use of private memorial sites, as they are problematic for researchers to access. This study aimed to address this gap by exploring the creation and use of an online memorial which is both private and relates to a death by suicide, often considered a socially stigmatised bereavement. The study used auto/biographical research methods within a single case study design to explore the narratives of a naturally occurring online memorial alongside an asynchronous email interview with the memorial's author. Thematic analysis of the data provided insights into the motivating factors behind, and the creation and use of, this example of an online memorial. The research also offered insights into the life of the deceased and that of a survivor of bereavement by suicide, and in so doing explored the distinctions between the life lived, the life experienced and the life as told through a form of cultural memorial expression increasingly prevalent in current society.
The study also considers the potential therapeutic benefit of creating and using online memorials as a mourning activity, which could influence Occupational Therapy practice, and in so doing identifies areas that would benefit from greater research attention to further explore the use, including the therapeutic use, of online memorials.
13

Preserving digital entities: A framework for choosing and testing preservation strategies

Rauch, Carl 11 1900 (has links)
The long-term preservation of digital objects has become increasingly relevant. Libraries, public institutions and museums, but also companies, are requesting solutions to store their digital files with all relevant contents and attributes for the future. This master thesis makes two contributions to research in digital preservation. The first is the creation of a testbed which stores many files in different file formats; these files can be used to evaluate the impact of preservation solutions. In this paper an environment for storing and describing files is suggested and implemented. The second contribution is a framework, based on Utility Analysis, for evaluating different preservation solutions. The application of a detailed hierarchy of objectives, considering the individual requirements of the user, allows a reasoned and clear decision for a specific preservation solution, one that can be supported with arguments. The theoretical framework is evaluated in two case studies. For the first, the whole process is realised; for the second, only the major part of the analysis, the objective tree, is treated in detail.
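The Utility Analysis described above weights a hierarchy of objectives to score candidate preservation strategies. A minimal sketch of that idea follows; the tree structure, criteria names, weights and scores are illustrative assumptions, not the hierarchy actually developed in the thesis.

```python
# Sketch of a Utility Analysis over a hierarchy of objectives.
# Weights and criteria are illustrative, not those from the thesis.

def utility(node, scores):
    """Recursively compute the weighted utility of an objective tree.

    Leaf nodes look up their measured score (0-5); inner nodes
    aggregate their children by relative weights that sum to 1.
    """
    if "children" not in node:
        return scores[node["name"]]
    return sum(child["weight"] * utility(child, scores)
               for child in node["children"])

# Hypothetical objective tree for comparing two preservation strategies.
tree = {
    "name": "preservation quality",
    "children": [
        {"name": "appearance", "weight": 0.4},
        {"name": "structure",  "weight": 0.3},
        {"name": "costs",      "weight": 0.3},
    ],
}

migration = {"appearance": 4, "structure": 5, "costs": 3}
emulation = {"appearance": 5, "structure": 3, "costs": 2}

print(round(utility(tree, migration), 2))  # 4.0
print(round(utility(tree, emulation), 2))  # 3.5
```

Because each branch carries an explicit weight, the final ranking can be defended argument by argument, which is the point of the framework.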
14

Implementation, use and analysis of open source learning management system 'Moodle' and e-learning for the deaf in Jordan

Khwaldeh, Sufian M. I. A. January 2011 (has links)
When learning mathematics, deaf children of primary school age experience difficulties due to their disability. In Jordan, little research has been undertaken to understand the problems facing deaf children and their teachers. Frequently, children are educated in special schools for the deaf; the majority of deaf children tend not to be integrated into mainstream education, although efforts are made to incorporate them into the system. Teachers in the mainstream education system rarely have the knowledge and experience to enable deaf students to reach their full potential. The methodological approach used in this research is a mixed one, consisting of action research and Human-Computer Interaction (HCI) research. The target group was deaf children aged nine years (in the third grade) and their teachers in Jordanian schools. Mathematics was chosen as the main focus of this study because it is a universal subject with its own concepts and rules, and at this level the teachers in the school have sufficient knowledge and experience to teach mathematics topics competently. In order to obtain a better understanding of the problems faced by teachers and deaf children in learning mathematics, semi-structured interviews were undertaken and questionnaires distributed to teachers. The main aim at that stage of the research was to explore the current use and status of the e-learning environment and LMS within Jordanian schools for the deaf. In later stages of this research, semi-structured interviews and questionnaires were used again to ascertain the effectiveness, usability and readiness of the adopted e-learning environment 'Moodle'. Finally, pre-tests and post-tests were used to assess the effectiveness of the e-learning environment and LMS. It is important to note that the intention was not to work with the children directly; rather, they served as test subjects. Based on the requirements and recommendations of the teachers of the deaf, a key requirements scheme was developed.
Four open source e-learning environments and LMSs were evaluated against the developed key requirements. The evaluation was based on a software engineering approach. The outcome of that evaluation was the adoption of an open source e-learning environment and LMS called 'Moodle'. Moodle was presented to the teachers for testing, and was found to be the most suitable e-learning environment and LMS to be adapted for use by deaf children in Jordan based on the teachers' requirements. Moodle was then presented to the deaf children to use during this research. After use, the activities of the deaf children and their teachers were analysed in terms of Human-Computer Interaction (HCI). The analysis covers readiness, usability, user satisfaction, ease of use, learnability, outcome/future use, content, collaboration and communication tools, and functionality.
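Evaluating candidate LMSs against a key-requirements scheme, as described above, amounts to a weighted coverage matrix. The sketch below illustrates the shape of such an evaluation; the requirement names, weights and coverage values are invented for illustration and are not the thesis's actual scheme.

```python
# Sketch of scoring candidate LMSs against a key-requirements scheme.
# Requirements, weights and coverage values are illustrative assumptions.

requirements = {
    "sign-language video support": 3,   # weight: higher = more important
    "visual (icon-based) navigation": 3,
    "Arabic language support": 2,
    "open source licence": 1,
}

# 1 = candidate satisfies the requirement, 0 = it does not (toy data).
coverage = {
    "Moodle": {"sign-language video support": 1,
               "visual (icon-based) navigation": 1,
               "Arabic language support": 1,
               "open source licence": 1},
    "OtherLMS": {"sign-language video support": 0,
                 "visual (icon-based) navigation": 1,
                 "Arabic language support": 0,
                 "open source licence": 1},
}

def score(candidate):
    """Weighted sum of the requirements a candidate satisfies."""
    return sum(w * coverage[candidate][req] for req, w in requirements.items())

best = max(coverage, key=score)
print(best, score(best))  # Moodle 9
```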
15

Information-seeking and perceptions of expertise in an electronic network of practice

Ziebro, Monique C. January 2013 (has links)
This study assesses information-seeking and perceptions of expertise in Electronic Networks of Practice (ENoPs). ENoPs are a particular type of online community focused on sharing information related to a specific work-related profession (Wasko and Faraj, 2005). To date, there has been little empirical work on the dynamics of information exchange in ENoPs (Whelan, 2007). The little we do know is based on face-to-face communities, which cannot be generalized to online interactions due to changes in size, purpose, and method of communication. Understanding the type and perceived value of information is an important line of theoretical inquiry because it has the potential to identify the specific informational needs these communities fulfil and the types of people most likely to fulfil them. This research was conducted in an ENoP focused on the exchange of information related to the practice of engineering. The community studied, Eng-Tips, is a thriving network that has produced over 150,000 posts and comprises engineers from twenty-one different specialties. Interactions take place solely through virtually mediated technology and focus primarily on practice-related issues. The format of interaction is typically a query followed by a stream of ensuing replies. Data were collected through metrics and a coding procedure that allowed me to identify the most common queries in the ENoP. My data revealed that queries in the ENoP tended to focus on obtaining solutions, meta-knowledge, or validation. The high emphasis on validation was similar to that found in face-to-face friendship networks, and was contrary to Cross et al.'s (2001) anticipated results, most likely due to the presence of anonymity. I also found that experience of interacting with multiple specialties (i.e. interactional expertise) was positively associated with perceived expertise.
Finally, I discovered that replies, giving out nominations, and frequent logins were positively associated with the number of expert nominations one received in the community. This research makes contributions to both theory and practice. I contribute to theory on information-seeking by extending Cross et al.'s (2001) research to the online environment and articulating the type of informational benefits sought in the ENoP. I contribute to theory on expertise by exploring the characteristics associated with perceived expertise, and the reasons why interactional expertise may be particularly valued in ENoPs. My work in this area reveals that, in the context of the ENoP studied, a 'common practice' is highly fragmented and loosely knit, further distinguishing this entity as a unique organizational form. My findings call into question the validity of a practice-based approach for examining these entities, and for these reasons I suggest they may be better conceptualized as Electronic Networks of Discourse. Practical ramifications focus on describing the type of information members want to obtain from their involvement in the community, which may benefit members, organizations, and managers of the ENoP.
16

Blood vessel segmentation and shape analysis for quantification of coronary artery stenosis in CT angiography

Wang, Yin January 2011 (has links)
This thesis presents an automated framework for quantitative vascular shape analysis of the coronary arteries, which constitutes an important and fundamental component of an automated image-based diagnostic system. Firstly, an automated vessel segmentation algorithm is developed to extract the coronary arteries based on the framework of active contours. Both global and local intensity statistics are utilised in the energy functional calculation, which allows for dealing with non-uniform brightness conditions while evolving the contour towards the desired boundaries without being trapped in local minima. To suppress kissing vessel artifacts, a slice-by-slice correction scheme, based on multiple-region competition, is proposed to identify and track the kissing vessels throughout the transaxial images of the CTA data. Based on the resulting segmentation, we then present a dedicated algorithm to estimate the geometric parameters of the extracted arteries, with a focus on vessel bifurcations. In particular, the centreline and associated reference surface of the coronary arteries, in the vicinity of arterial bifurcations, are determined by registering an elliptical cross-sectional tube to the desired constituent branch. The registration problem is solved by a hybrid optimisation method, combining local greedy search and dynamic programming, which ensures the global optimality of the solution and permits the incorporation of any hard constraints posed on the tube model within a natural and direct framework. In contrast with conventional volume-domain methods, this technique works directly on the mesh domain, thus alleviating the need for image upsampling. The performance of the proposed framework, in terms of efficiency and accuracy, is demonstrated on both synthetic and clinical image data.
Experimental results have shown that our techniques are capable of extracting the major branches of the coronary arteries and estimating the related geometric parameters (i.e., the centreline and the reference surface) with a high degree of agreement with those obtained through manual delineation. In particular, all of the major branches of the coronary arteries are successfully detected by the proposed technique, with a voxel-wise error of 0.73 voxels relative to the manually delineated ground truth data. Through the application of the slice-by-slice correction scheme, the false positive metric for those coronary segments affected by kissing vessel artifacts is reduced from 294% to 22.5%. In terms of the capability of the presented framework in defining the location of centrelines across vessel bifurcations, the mean square error (MSE) of the resulting centreline, with respect to the ground truth data, is reduced by an average of 62.3% compared with the initial estimate obtained using a topological-thinning-based algorithm.
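A voxel-wise centreline error like the 0.73-voxel figure quoted above is typically computed as a mean closest-point distance between the extracted centreline and the manually delineated one. The sketch below shows that common formulation; it is an assumption for illustration, not necessarily the exact metric used in the thesis.

```python
# Sketch of a voxel-wise centreline error: mean closest-point distance
# from an extracted centreline to a manually delineated ground truth.
# The choice of metric is an assumption for illustration.
import math

def mean_closest_distance(extracted, ground_truth):
    """Average, over extracted points, of the distance (in voxel
    units) to the nearest ground-truth point."""
    total = 0.0
    for p in extracted:
        total += min(math.dist(p, q) for q in ground_truth)
    return total / len(extracted)

# Toy data: a straight ground-truth centreline and an extracted one
# offset by half a voxel everywhere.
truth = [(float(i), 0.0, 0.0) for i in range(10)]
extracted = [(float(i), 0.5, 0.0) for i in range(10)]

print(mean_closest_distance(extracted, truth))  # 0.5
```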
17

Aligning global and local aspects of a national information programme for health : developing a critical and socio-technical appreciation

Harrop, Stephen Nicholas January 2010 (has links)
Written by a full-time clinician, this thesis explores an example of ‘Big IT’ in healthcare, the National Programme for IT in the United Kingdom National Health Service. It is unique in exploring the interaction between people and information technology in the healthcare workplace, from an engaged standpoint within one of the National Programme’s implementation sites, in order to provide a critical and a socio-technical appreciation.
18

A hybrid machine learning approach to measuring sentiment, credibility and influence on Twitter

Heeley, Robert January 2017 (has links)
Current sentiment analysis on Twitter is hampered by two factors: not all accounts are genuine, and not all users have the same level of influence. Including non-credible and irrelevant Tweets in sentiment analysis dilutes the effectiveness of any sentiment produced. Similarly, counting a Tweet with a potential audience of 10 users as having the same impact as a Tweet that could reach 1 million users does not accurately reflect its importance. In order to mitigate these inherent problems, a novel method was devised to account for credibility and to measure influence. The current definition of credibility on Twitter was redefined and expanded to incorporate the subtle nuances that exist beyond the simple distinction between human and bot accounts. Once basic sentiment was produced, it was filtered by removing non-credible Tweets, and the remaining sentiment was augmented by weighting it based upon both the user's and the Tweet's influence scores. Measuring one person's opinion is costly and lacking in power; however, machine learning techniques allow us to capture and analyse millions of opinions. Combining a Tweet's sentiment with the user's influence score and their credibility rating greatly increases the understanding and usefulness of that sentiment. In order to gauge and measure the impact of this research and highlight its generalisability, this thesis examined two distinct real-world datasets, the UK General Election 2015 and the Rugby World Cup 2015, which also served to validate the approach used. A better, more accurate understanding of sentiment on Twitter has the potential for broad impact, from providing targeted advertising that is in tune with people's needs and desires to providing governments with a better understanding of the will and desire of the people.
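The filter-then-weight pipeline described above, dropping non-credible Tweets and then weighting sentiment by influence, can be sketched as follows. The field names, the credibility threshold and the influence-weighted average are illustrative assumptions, not the thesis's exact formulation.

```python
# Sketch of credibility-filtered, influence-weighted sentiment.
# Threshold, field names and weighting scheme are assumptions
# for illustration only.

def weighted_sentiment(tweets, credibility_threshold=0.5):
    """Drop non-credible tweets, then average sentiment weighted
    by each tweet's influence score."""
    credible = [t for t in tweets if t["credibility"] >= credibility_threshold]
    total_influence = sum(t["influence"] for t in credible)
    if total_influence == 0:
        return 0.0
    return sum(t["sentiment"] * t["influence"] for t in credible) / total_influence

tweets = [
    {"sentiment": 0.8,  "influence": 1000, "credibility": 0.9},  # influential, credible
    {"sentiment": -0.9, "influence": 10,   "credibility": 0.8},  # credible, small reach
    {"sentiment": -1.0, "influence": 5000, "credibility": 0.1},  # likely bot: filtered out
]

print(round(weighted_sentiment(tweets), 3))  # 0.783
```

Note how the likely bot, despite its large reach, contributes nothing, while the small-audience credible user barely moves the aggregate: exactly the two distortions the method is designed to correct.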
19

User redefinition of search goals through interaction with an information retrieval system

Hider, Philip Martin January 2004 (has links)
Search goals of users of information retrieval systems have commonly been assumed to be static and well formed. However, a significant amount of goal redefinition was detected in the studies described here. A pilot study examined user behaviour at a library OPAC, showing that search results would quite frequently induce users to reconsider and revise their search goals, sometimes following up with a new search based on this revision (labelled "strong" goal redefinition). The main analysis employed transaction logs from the OCLC FirstSearch service, investigating what factors, if any, might affect the amount of goal redefinition that takes place during a search session. To this end, ten hypotheses were proposed and considered. Within each search session, logged queries were coded according to their conceptual differences or similarities in order to construct indices of strong goal redefinition: a chronological content analysis was thus performed on the transaction logs. The indices of redefinition for search sessions on different FirstSearch databases were compared. It was found that different databases induced goal redefinition to different extents. Further analysis showed that the metadata displayed by a database appeared to affect the amount of goal redefinition, and that the presence of abstracts in results was a positive factor, as was the presence of descriptors and identifiers, perhaps because of the former's hyperlinking nature on the FirstSearch interface. On the other hand, no evidence was found to indicate that abstract length had much of an effect on goal redefinition, nor did hit rate or levels of precision and recall. Of the two indices of redefinition that were produced, the "refined" index showed signs of greater precision. Implications of the findings are discussed. It is suggested that goal redefinition should be considered a positive result of system feedback, and that systems should readily allow users to follow up on redefined goals.
Abstracts and summaries of documents should be presented to the user as frequently as possible, and hyperlinks from key terms in the metadata should also be created to assist evolving searches. The importance of how system feedback is encountered by the user is emphasized in a new model of information retrieval, which embraces the nonconscious as part of the "cognitive viewpoint," allowing for nonconscious information wants to enter into a user's consciousness through cues encountered during the scanning of search results, triggering a redefinition of search goal. This thesis paves the way for a considerable amount of potentially important research, including: further testing and development of the index of goal redefinition; deeper transaction log analyses, perhaps using screen recorders, examining semantic content and contextualizing at the level of the query; and further identification and analysis of the factors affecting goal redefinition, across different types of information retrieval system.
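One way to picture an index of strong goal redefinition built from conceptually coded queries, as the analysis above describes, is the fraction of query transitions within a session that shift to a new concept. Both the coding scheme and this particular index definition are assumptions for illustration, not the thesis's actual indices.

```python
# Sketch of an index of strong goal redefinition for a search session.
# Queries are assumed pre-coded into conceptual topics; the index is
# the proportion of consecutive query pairs whose codes differ.
# Coding and index definition are illustrative assumptions.

def redefinition_index(coded_queries):
    """Fraction of consecutive query pairs with differing concept codes."""
    if len(coded_queries) < 2:
        return 0.0
    shifts = sum(1 for a, b in zip(coded_queries, coded_queries[1:]) if a != b)
    return shifts / (len(coded_queries) - 1)

# A session that drifts from one topic to a related one and settles there.
session = ["dogs", "dogs", "dog training", "wolf behaviour", "wolf behaviour"]
print(redefinition_index(session))  # 0.5
```

Comparing such an index across sessions on different databases is what allows the metadata-richness effects described above to surface.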
20

Investigating Android permissions and intents for malware detection

Abro, Fauzia Idrees January 2018 (has links)
Today's smartphones are used for a wide range of activities. This extended range of functionality has also seen the infiltration of new security threats. Android has been the favorite target of cyber criminals. Malicious parties are using highly stealthy techniques to perform targeted operations, which are hard to detect with conventional signature- and behaviour-based approaches. Additionally, the limited resources of mobile devices are inadequate for extensive malware detection tasks. Rapidly emerging Android malware merits a robust and effective detection solution. In this thesis, we present PIndroid, a novel Permissions- and Intents-based framework for identifying Android malware apps. To the best of the author's knowledge, PIndroid is the first solution that uses a combination of permissions and intents supplemented with ensemble methods for malware detection. It overcomes the drawbacks of some existing malware detection methods. Our goal is to provide mobile users with an effective malware detection and prevention solution, keeping in view the limited resources of mobile devices and the versatility of malware behaviour. Our detection engine classifies apps against certain distinguishing combinations of permissions and intents. We conducted a comparative study of different machine learning algorithms against several performance measures to demonstrate their relative advantages. The proposed approach, when applied to 1,745 real-world applications, provides more than 99% accuracy (the best reported to date). Empirical results suggest that the proposed framework is effective in detecting malware apps, including obfuscated ones. In this thesis, we also present AndroPIn, an Android-based malware detection algorithm using Permissions and Intents, designed with the methodology proposed in PIndroid.
AndroPIn overcomes the limitation of stealthy techniques used by malware by exploiting the usage patterns of permissions and intents. These features, which play a major role in sharing user data and device resources, cannot be obfuscated or altered, and are well suited to resource-constrained smartphones. Experimental evaluation on a corpus of real-world malware and benign apps demonstrates that the proposed algorithm can effectively detect malicious apps and is resilient to common obfuscation methods. Besides PIndroid and AndroPIn, this thesis contains three additional studies which supplement the proposed methodology. The first study investigates whether there is any correlation between permissions and intents that can be exploited to detect malware apps. For this, a statistical significance test was applied, and we found statistical evidence of a strong correlation between permissions and intents that could be exploited to detect malware applications. The second study investigates whether the performance of classifiers can be further improved with ensemble learning methods. We applied different ensemble methods such as bagging, boosting and stacking; the experiments with these methods yielded much improved results. The third study investigates whether the permissions- and intents-based system can be used to detect the ever-challenging colluding apps. Application collusion is an emerging threat to Android-based devices. We discuss the current state of research on app collusion and open challenges in the detection of colluding apps. We compare existing approaches and present an integrated approach that can be used to detect malicious app collusion.
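The core idea above, encoding each app as a binary vector over declared permissions and intents and letting an ensemble vote on the label, can be sketched as follows. The vocabulary, the toy rule-based base classifiers, and the majority vote standing in for the bagging/boosting/stacking methods the thesis evaluates are all illustrative assumptions, not PIndroid's trained models.

```python
# Sketch of PIndroid-style features: each app becomes a binary vector
# over a fixed vocabulary of permissions and intents, and an ensemble
# votes on the label. Vocabulary and base "classifiers" are
# illustrative assumptions.

VOCAB = [
    "android.permission.SEND_SMS",
    "android.permission.READ_CONTACTS",
    "android.permission.INTERNET",
    "android.intent.action.BOOT_COMPLETED",
]

def featurize(declared):
    """Binary feature vector: 1 if the permission/intent is declared."""
    return [1 if item in declared else 0 for item in VOCAB]

def majority_vote(classifiers, features):
    """Ensemble: label 'malware' if most base classifiers say so."""
    votes = sum(clf(features) for clf in classifiers)
    return "malware" if votes > len(classifiers) / 2 else "benign"

# Toy base classifiers keyed on suspicious feature combinations.
classifiers = [
    lambda f: f[0] and f[2],   # SMS permission + internet access
    lambda f: f[1] and f[2],   # contacts access + internet access
    lambda f: f[3],            # auto-start on boot
]

suspicious = featurize({"android.permission.SEND_SMS",
                        "android.permission.INTERNET",
                        "android.intent.action.BOOT_COMPLETED"})
print(majority_vote(classifiers, suspicious))  # malware
```

Because the declared permissions and intents must be present in the manifest for the app to function, this feature set resists the code-level obfuscation the abstract mentions.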
