11

Implementation, use and analysis of open source learning management system 'Moodle' and e-learning for the deaf in Jordan

Khwaldeh, Sufian M. I. A. January 2011 (has links)
When learning mathematics, deaf children of primary school age experience difficulties due to their disability. In Jordan, little research has been undertaken to understand the problems facing deaf children and their teachers. Frequently, children are educated in special schools for the deaf; the majority of deaf children tend not to be integrated into mainstream education, although efforts are made to incorporate them into the system. Teachers in the mainstream education system rarely have the knowledge and experience to enable deaf students to reach their full potential. The methodological approach used in this research is a mixed one, consisting of action research and Human Computer Interaction (HCI) research. The target group was deaf children aged nine years (in the third grade) and their teachers in Jordanian schools. Mathematics was chosen as the main focus of this study because it is a universal subject with its own concepts and rules, and at this level the teachers in the school have sufficient knowledge and experience to teach mathematics topics competently. In order to obtain a better understanding of the problems faced by teachers and deaf children in learning mathematics, semi-structured interviews were undertaken and questionnaires distributed to teachers. The main aim at that stage of the research was to explore the current use and status of e-learning environments and LMSs within the Jordanian schools for the deaf. In later stages of this research, semi-structured interviews and questionnaires were used again to ascertain the effectiveness, usability and readiness of the adopted e-learning environment, “Moodle”. Finally, pre-tests and post-tests were used to assess the effectiveness of the e-learning environment and LMS. It is important to note that the research did not work with the children directly; rather, they participated as test subjects. Based on the requirements and recommendations of the teachers of the deaf, a key requirements scheme was developed. Four open source e-learning environments and LMSs were evaluated against the developed key requirements, using a software engineering approach. The outcome of that evaluation was the adoption of an open source e-learning environment and LMS called “Moodle”, which was presented to the teachers for testing. It was found to be the most suitable e-learning environment and LMS to be adapted for use by deaf children in Jordan, based on the teachers’ requirements. Moodle was then presented to the deaf children to use during this research. After use, the activities of the deaf children and their teachers were recorded and analysed in terms of Human Computer Interaction (HCI). The analysis covers readiness, usability, user satisfaction, ease of use, learnability, outcome/future use, content, collaboration and communication tools, and functionality.
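The requirements-based evaluation the abstract describes can be pictured with a small weighted-scoring sketch. The requirement names, weights, and scores below are hypothetical placeholders, not the key requirements scheme developed in the thesis; the point is only to show how four candidate LMSs might be ranked against weighted teacher requirements.

```python
# Hypothetical weighted key-requirements evaluation of candidate LMSs.
# Requirement names, weights, and scores are illustrative assumptions only.

REQUIREMENTS = {  # requirement -> weight (assumed; weights sum to 1.0)
    "visual_content_support": 0.30,
    "sign_language_video_support": 0.25,
    "simple_navigation": 0.20,
    "arabic_localisation": 0.15,
    "teacher_admin_tools": 0.10,
}

# Assumed per-requirement scores (0-5) for four open source candidates.
CANDIDATES = {
    "Moodle": [5, 4, 4, 5, 5],
    "LMS-B":  [3, 2, 4, 3, 4],
    "LMS-C":  [4, 3, 3, 2, 3],
    "LMS-D":  [2, 2, 5, 1, 2],
}

def weighted_score(scores: list[int]) -> float:
    """Weighted sum of requirement scores, normalised to [0, 1]."""
    return sum(w * s for w, s in zip(REQUIREMENTS.values(), scores)) / 5.0

for name in sorted(CANDIDATES, key=lambda c: weighted_score(CANDIDATES[c]),
                   reverse=True):
    print(f"{name}: {weighted_score(CANDIDATES[name]):.2f}")
```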
12

Information-seeking and perceptions of expertise in an electronic network of practice

Ziebro, Monique C. January 2013 (has links)
This study assesses information-seeking and perceptions of expertise in Electronic Networks of Practice (ENoPs). ENoPs are a particular type of online community focused on sharing information related to a specific work-related profession (Wasko and Faraj, 2005). To date, there has been little empirical work on the dynamics of information exchange in ENoPs (Whelan, 2007). The little we do know is based on face-to-face communities, which cannot be generalized to online interactions due to changes in size, purpose, and method of communication. Understanding the type and perceived value of information is an important line of theoretical inquiry because it has the potential to identify the specific informational needs these communities fulfil and the types of people most likely to fulfil them. This research was conducted in an ENoP focused on the exchange of information related to the practice of engineering. The community studied, Eng-Tips, is a thriving network that has produced over 150,000 posts and comprises engineers from twenty-one different specialties. Interactions take place solely through virtually mediated technology and focus primarily on practice-related issues. The format of interaction is typically a query followed by a stream of ensuing replies. Data were collected through metrics and a coding procedure that allowed me to identify the most common queries in the ENoP. My data revealed that queries in the ENoP tended to focus on obtaining solutions, meta-knowledge, or validation. The high emphasis on validation was similar to that found in face-to-face friendship networks, and was contrary to Cross et al.’s (2001) anticipated results, most likely due to the presence of anonymity. I also found that experience of interacting with multiple specialties (i.e. interactional expertise) was positively associated with perceived expertise. Finally, I discovered that replies, giving out nominations, and frequent logins were positively associated with the number of expert nominations one received in the community. This research makes contributions to both theory and practice. I contribute to theory on information-seeking by extending Cross et al.’s (2001) research to the online environment and articulating the type of informational benefits sought in the ENoP. I contribute to theory on expertise by exploring the characteristics associated with perceived expertise and the reasons why interactional expertise may be particularly valued in ENoPs. My work in this area reveals that—in the context of the ENoP studied—a ‘common practice’ is highly fragmented and loosely knit, further distinguishing this entity as a unique organizational form. My findings in this area call into question the validity of a practice-based approach for examining these entities, and for these reasons I suggest they may be better conceptualized as Electronic Networks of Discourse. Practical ramifications focus on describing the type of information members want to obtain from their involvement in the community, which may benefit members, organizations, and managers of the ENoP.
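The association between activity metrics and expert nominations reported above can be sketched as a simple correlation analysis. The toy records and column layout below are assumptions for illustration, not the Eng-Tips data or the coding procedure used in the study.

```python
# Toy sketch: correlate per-user activity metrics with expert nominations
# received. The records are invented; the study's real data came from
# community metrics plus a manual coding procedure.

import numpy as np

# Columns: replies, nominations_given, logins, nominations_received.
users = np.array([
    [120, 15, 300,  9],
    [ 40,  2,  80,  1],
    [210, 30, 500, 14],
    [ 10,  0,  25,  0],
    [ 95, 12, 260,  6],
], dtype=float)

received = users[:, 3]
for col, name in enumerate(["replies", "nominations_given", "logins"]):
    r = np.corrcoef(users[:, col], received)[0, 1]  # Pearson correlation
    print(f"{name:>17}: r = {r:+.2f}")
```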
13

Blood vessel segmentation and shape analysis for quantification of coronary artery stenosis in CT angiography

Wang, Yin January 2011 (has links)
This thesis presents an automated framework for quantitative vascular shape analysis of the coronary arteries, which constitutes an important and fundamental component of an automated image-based diagnostic system. Firstly, an automated vessel segmentation algorithm is developed to extract the coronary arteries within the framework of active contours. Both global and local intensity statistics are utilised in the energy functional, which allows the method to deal with non-uniform brightness conditions while evolving the contour towards the desired boundaries without being trapped in local minima. To suppress kissing vessel artifacts, a slice-by-slice correction scheme, based on multiple-region competition, is proposed to identify and track the kissing vessels throughout the transaxial images of the CTA data. Based on the resulting segmentation, we then present a dedicated algorithm to estimate the geometric parameters of the extracted arteries, with a focus on vessel bifurcations. In particular, the centreline and associated reference surface of the coronary arteries, in the vicinity of arterial bifurcations, are determined by registering an elliptical cross-sectional tube to the desired constituent branch. The registration problem is solved by a hybrid optimisation method, combining local greedy search and dynamic programming, which ensures the global optimality of the solution and permits the incorporation of any hard constraints posed on the tube model within a natural and direct framework. In contrast with conventional volume-domain methods, this technique works directly on the mesh domain, thus alleviating the need for image upsampling. The performance of the proposed framework, in terms of efficiency and accuracy, is demonstrated on both synthetic and clinical image data. Experimental results have shown that our techniques are capable of extracting the major branches of the coronary arteries and estimating the related geometric parameters (i.e., the centreline and the reference surface) with a high degree of agreement with those obtained through manual delineation. In particular, all of the major branches of the coronary arteries are successfully detected by the proposed technique, with a voxel-wise error of 0.73 voxels against the manually delineated ground truth data. Through the application of the slice-by-slice correction scheme, the false positive metric, for those coronary segments affected by kissing vessel artifacts, is reduced from 294% to 22.5%. In terms of the capability of the presented framework to define the location of centrelines across vessel bifurcations, the mean square error (MSE) of the resulting centreline, with respect to the ground truth data, is reduced by an average of 62.3% when compared with the initial estimation obtained using a topological-thinning-based algorithm.
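A schematic sketch of the kind of hybrid region-based energy described above: a global (Chan-Vese-style) term plus a local intensity term so that the contour copes with non-uniform brightness. This is a simplified illustration under assumed definitions, not the thesis's actual functional; `alpha`, the Gaussian window `sigma`, and the level-set convention are placeholders.

```python
# Simplified hybrid energy for a region-based active contour: a global
# Chan-Vese term plus a local term against Gaussian-windowed means, which
# tolerates intensity inhomogeneity. Not the thesis's exact functional.

import numpy as np
from scipy import ndimage

def hybrid_energy(image: np.ndarray, phi: np.ndarray,
                  alpha: float = 0.5, sigma: float = 3.0) -> float:
    inside, outside = phi > 0, phi <= 0

    # Global term: each region is fitted by its mean intensity (Chan-Vese).
    c_in, c_out = image[inside].mean(), image[outside].mean()
    e_global = ((image[inside] - c_in) ** 2).sum() \
             + ((image[outside] - c_out) ** 2).sum()

    # Local term: each pixel is fitted against a Gaussian-smoothed local
    # mean, which varies spatially and so absorbs slow brightness variation.
    local_mean = ndimage.gaussian_filter(image, sigma)
    e_local = ((image[inside] - local_mean[inside]) ** 2).sum() \
            + ((image[outside] - local_mean[outside]) ** 2).sum()

    return alpha * e_global + (1.0 - alpha) * e_local

# Gradient descent on the level-set function phi would minimise this energy;
# the segmentation boundary is the zero level set of phi.
```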
14

Aligning global and local aspects of a national information programme for health : developing a critical and socio-technical appreciation

Harrop, Stephen Nicholas January 2010 (has links)
Written by a full-time clinician, this thesis explores an example of ‘Big IT’ in healthcare: the National Programme for IT in the United Kingdom National Health Service. It is unique in exploring the interaction between people and information technology in the healthcare workplace from an engaged standpoint within one of the National Programme’s implementation sites, in order to develop a critical and socio-technical appreciation.
15

A hybrid machine learning approach to measuring sentiment, credibility and influence on Twitter

Heeley, Robert January 2017 (has links)
Current sentiment analysis on Twitter is hampered by two factors: not all accounts are genuine, and not all users have the same level of influence. Including non-credible and irrelevant Tweets in sentiment analysis dilutes the effectiveness of any sentiment produced. Similarly, counting a Tweet with a potential audience of 10 users as having the same impact as a Tweet that could reach 1 million users does not accurately reflect its importance. To mitigate these inherent problems, a novel method was devised to account for credibility and to measure influence. The current definition of credibility on Twitter was redefined and expanded to incorporate the subtle nuances that exist beyond the simple distinction between human and bot accounts. Once basic sentiment was produced, it was filtered by removing non-credible Tweets, and the remaining sentiment was augmented by weighting it based upon both the user’s and the Tweet’s influence scores. Measuring one person’s opinion is costly and lacking in power; machine learning techniques, however, allow us to capture and analyse millions of opinions. Combining a Tweet’s sentiment with the user’s influence score and their credibility rating greatly increases the understanding and usefulness of that sentiment. In order to gauge the impact of this research and highlight its generalisability, this thesis examined two distinct real-world datasets, the UK General Election 2015 and the Rugby World Cup 2015, which also served to validate the approach used. A better, more accurate understanding of sentiment on Twitter has the potential for broad impact, from providing targeted advertising that is in tune with people’s needs and desires to providing governments with a better understanding of the will and desire of the people.
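The filtering-and-weighting step can be made concrete with a minimal sketch: discard Tweets below a credibility threshold, then take an influence-weighted mean of the remaining sentiment. The field names, threshold, and toy values are assumptions, not the thesis's actual model.

```python
# Minimal sketch of credibility filtering plus influence-weighted sentiment.
# Fields, threshold, and sample values are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Tweet:
    sentiment: float    # -1.0 (negative) .. +1.0 (positive)
    credibility: float  # 0.0 .. 1.0, from a credibility classifier
    influence: float    # e.g. a score derived from potential audience

def weighted_sentiment(tweets, credibility_threshold=0.5):
    """Influence-weighted mean sentiment over credible Tweets only."""
    credible = [t for t in tweets if t.credibility >= credibility_threshold]
    total = sum(t.influence for t in credible)
    if total == 0:
        return 0.0
    return sum(t.sentiment * t.influence for t in credible) / total

sample = [Tweet(0.8, 0.9, 6.0),   # credible and influential: dominates
          Tweet(-0.9, 0.1, 2.0),  # likely bot: filtered out
          Tweet(0.1, 0.7, 4.5)]
print(weighted_sentiment(sample))
```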
16

User redefinition of search goals through interaction with an information retrieval system

Hider, Philip Martin January 2004 (has links)
Search goals of users of information retrieval systems have commonly been assumed to be static and well-formed. However, a significant amount of goal redefinition is detected in the studies described. A pilot study examined user behaviour at a library OPAC, showing that search results would quite frequently induce users to reconsider and revise their search goals, sometimes following up with a new search based on this revision (labelled "strong" goal redefinition). The main analysis employed transaction logs from the OCLC FirstSearch service, investigating what factors, if any, might affect the amount of goal redefinition that takes place during a search session. To this end, ten hypotheses were proposed and considered. Within each search session, logged queries were coded according to their conceptual differences or similarities, in order for indices of strong goal redefinition to be constructed: a chronological content analysis was thus performed on the transaction logs. The indices of redefinition for search sessions on different FirstSearch databases were compared. It was found that different databases induced goal redefinition to different extents. Further analysis showed that the metadata displayed by a database appeared to affect the amount of goal redefinition, and that the presence of abstracts in results was a positive factor, as was the presence of descriptors and identifiers, perhaps because of the former's hyperlinking nature on the FirstSearch interface. On the other hand, no evidence was found to indicate that abstract length, hit rate, or levels of precision and recall had much of an effect on goal redefinition. Of the two indices of redefinition that were produced, the "refined" index showed signs of greater precision. Implications of the findings are discussed. It is suggested that goal redefinition should be considered a positive result of system feedback, and that systems should readily allow users to follow up on redefined goals. Abstracts and summaries of documents should be presented to the user as frequently as possible, and hyperlinks from key terms in the metadata should also be created to assist evolving searches. The importance of how system feedback is encountered by the user is emphasized in a new model of information retrieval, which embraces the nonconscious as part of the "cognitive viewpoint", allowing nonconscious information wants to enter a user's consciousness through cues encountered during the scanning of search results, triggering a redefinition of the search goal. This thesis paves the way for a considerable amount of potentially important research, including: further testing and development of the index of goal redefinition; deeper transaction log analyses, perhaps using screen recorders, examining semantic content and contextualizing at the level of the query; and further identification and analysis of the factors affecting goal redefinition across different types of information retrieval system.
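One way to picture the index construction described above: code each query in a session with a concept label, then count conceptual shifts between consecutive queries. The coding scheme and index definition below are simplified assumptions, not the thesis's actual instrument.

```python
# Simplified sketch of an index of strong goal redefinition computed from
# conceptually coded queries in one session. The study itself used a more
# refined chronological content analysis of FirstSearch transaction logs.

def redefinition_index(coded_queries: list[str]) -> float:
    """Fraction of consecutive query pairs whose concept codes differ."""
    if len(coded_queries) < 2:
        return 0.0
    shifts = sum(1 for a, b in zip(coded_queries, coded_queries[1:]) if a != b)
    return shifts / (len(coded_queries) - 1)

# Hypothetical session: three queries on concept A, then a shift to concept B
# after the user sees the results - one conceptual shift in three transitions.
print(redefinition_index(["A", "A", "A", "B"]))  # 0.33...
```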
17

Investigating Android permissions and intents for malware detection

Abro, Fauzia Idrees January 2018 (has links)
Today’s smartphones are used for a wide range of activities, and this extended range of functionality has also seen the infiltration of new security threats. Android has been the favourite target of cyber criminals. Malicious parties use highly stealthy techniques to perform targeted operations, which are hard to detect with conventional signature- and behaviour-based approaches. Additionally, the limited resources of mobile devices are inadequate for extensive malware detection tasks. Rapidly emerging Android malware merits a robust and effective detection solution. In this thesis, we present PIndroid, a novel Permissions and Intents based framework for identifying Android malware apps. To the best of the author’s knowledge, PIndroid is the first solution that uses a combination of permissions and intents supplemented with ensemble methods for malware detection, and it overcomes the drawbacks of some existing malware detection methods. Our goal is to provide mobile users with an effective malware detection and prevention solution, keeping in view the limited resources of mobile devices and the versatility of malware behaviour. Our detection engine classifies apps against certain distinguishing combinations of permissions and intents. We conducted a comparative study of different machine learning algorithms against several performance measures to demonstrate their relative advantages. The proposed approach, when applied to 1,745 real-world applications, provides more than 99% accuracy (the best reported to date). Empirical results suggest that the proposed framework is effective in detecting malware apps, including obfuscated ones. In this thesis, we also present AndroPIn, an Android-based malware detection algorithm using Permissions and Intents, designed with the methodology proposed in PIndroid. AndroPIn overcomes the limitation of stealthy techniques used by malware by exploiting the usage pattern of permissions and intents. These features, which play a major role in sharing user data and device resources, cannot be obfuscated or altered, and are well suited to resource-constrained smartphones. Experimental evaluation on a corpus of real-world malware and benign apps demonstrates that the proposed algorithm can effectively detect malicious apps and is resilient to common obfuscation methods. Besides PIndroid and AndroPIn, this thesis contains three additional studies that supplement the proposed methodology. The first study investigates whether there is any correlation between permissions and intents that can be exploited to detect malware apps. A statistical significance test was applied, and we found statistical evidence of a strong correlation between permissions and intents that could be exploited to detect malware applications. The second study investigates whether the performance of classifiers can be further improved with ensemble learning methods; we applied different ensemble methods, such as bagging, boosting and stacking, and the experiments yielded much improved results. The third study investigates whether the permissions and intents based system can be used to detect the ever-challenging colluding apps. Application collusion is an emerging threat to Android-based devices. We discuss the current state of research on app collusion and the open challenges in detecting colluding apps, compare existing approaches, and present an integrated approach that can be used to detect malicious app collusion.
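The permissions-and-intents classification idea can be sketched as a binary feature matrix fed to an ensemble classifier. The feature names, toy labels, and the choice of scikit-learn's random forest are assumptions for illustration; they are not the thesis's feature set or tuned ensemble.

```python
# Toy sketch of PIndroid-style detection: binary permission/intent features
# per app, classified by an ensemble. Features, data, and labels are invented.

from sklearn.ensemble import RandomForestClassifier

FEATURES = [  # assumed features: two permissions, then two intent actions
    "SEND_SMS", "READ_CONTACTS",
    "android.intent.action.BOOT_COMPLETED", "android.intent.action.MAIN",
]

X = [              # one row per app; 1 = requested/declared in the manifest
    [1, 1, 1, 1],  # sends SMS, reads contacts, starts at boot
    [0, 0, 0, 1],
    [1, 0, 1, 0],
    [0, 1, 0, 1],
]
y = [1, 0, 1, 0]   # 1 = malware, 0 = benign (toy ground truth)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print(clf.predict([[1, 1, 0, 1]]))  # classify an unseen app
```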
18

User modelling for knowledge sharing in e-mail communication

Kim, Sanghee January 2002 (has links)
This thesis addresses the problem of sharing and transferring knowledge within knowledge-intensive organisations from a user modelling perspective, with the purpose of improving individual and group performance. It explores the idea of creating organisational environments from which any of the users involved can benefit by being aware of each other, such that the sharing of expertise between those who are knowledge providers and those who are knowledge seekers can be maximised. In order to encourage individuals to share such valuable expertise, it also explores the idea of keeping a balance between ensuring the availability of information and the increase in user workloads due to the need to handle unwanted information. In an attempt to demonstrate these ideas, this research examines the application of user modelling techniques to the development of communication-based task learning systems built on e-mail communication. The design rationale for using e-mail is that personally held expertise is often explicated through e-mail exchanges, which therefore provide a good source for extracting user knowledge. The provision of an automatic message categorisation system that combines knowledge acquired from both statistical and symbolic text learning techniques is the first of the three themes of this work. The creation of a new user model that captures the different levels of expertise reflected in exchanged e-mail messages, and makes use of them in linking knowledge providers and knowledge seekers, is the second. The design of a new information distribution method to reduce both information overload and underload is the third.
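A minimal sketch of the hybrid categorisation idea: a symbolic (rule-based) pass backed by a statistical text classifier. The rules, categories, and training messages below are invented for illustration, and scikit-learn's naive Bayes stands in for whichever learners the thesis actually combined.

```python
# Toy hybrid e-mail categoriser: symbolic keyword rules decide first, then a
# statistical (naive Bayes) fallback. All content here is illustrative.

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

RULES = {"invoice": "finance", "meeting": "scheduling"}  # symbolic layer

train_msgs = ["please pay the invoice", "budget report attached",
              "meeting at noon", "agenda for tomorrow's meeting"]
train_labels = ["finance", "finance", "scheduling", "scheduling"]

vec = CountVectorizer()
model = MultinomialNB().fit(vec.fit_transform(train_msgs), train_labels)

def categorise(message: str) -> str:
    for keyword, category in RULES.items():  # symbolic pass first
        if keyword in message.lower():
            return category
    return model.predict(vec.transform([message]))[0]  # statistical fallback

print(categorise("new invoice attached"))      # rule fires -> finance
print(categorise("quarterly budget figures"))  # classifier -> finance
```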
19

Mechanism design for eliciting costly observations in next generation citizen sensor networks

Papakonstantinou, Athanasios January 2010 (has links)
Citizen sensor networks are open information systems in which members of the public act as information providers. The information distributed in such networks ranges from observations of events (e.g. noise measurements or monitoring of environmental parameters) to probabilistic estimates (e.g. projected traffic reports or weather forecasts). However, due to rapid advances in technology, such as high-speed mobile internet and sophisticated portable devices (from smartphones to hand-held game consoles), it is expected that citizen sensor networks will evolve. This evolution will be driven by an increase in the number of information providers, since, in the future, it will be much easier to gather and communicate information at a large scale, which, in turn, will trigger a transition to more commercial applications. Given this projected evolution, one key difference between future citizen sensor networks and conventional present ones is the emergence of self-interested behaviour, which can manifest in two main ways. First, information providers may choose to commit insufficient resources when producing their observations, and second, they may opt to misreport them. Both aspects of this self-interested behaviour are ignored in current citizen sensor networks. However, as their scope broadens and commercial applications expand, information providers are likely to demand some kind of payment (e.g. real or virtual currency) for the information they provide. Naturally, those interested in buying this information will also require guarantees of its quality. It is these issues that we deal with in this thesis, through the introduction of a series of novel two-stage mechanisms based on strictly proper scoring rules. We focus on strictly proper scoring rules because they have been used in the past as a method of eliciting truthful reporting of predictions in various forecasting scenarios (most notably in weather forecasting). By using payments based on such scoring rules, our mechanisms effectively address the issue of selfish behaviour by motivating information providers in a citizen sensor network, first, to invest the resources required by the information buyer in the generation of their observations, and second, to report them truthfully. To begin with, we introduce a mechanism that allows the centre (acting as an information buyer) to select a single agent that can provide a costly observation at a minimum cost. This is the first time a mechanism has been derived for a setting in which the centre has no knowledge of the actual costs involved in the generation of the agents' observations. Building on this, we then make two further contributions to the state of the art with the introduction of two extensions of this mechanism. First, we extend the mechanism so that it can be applied in a citizen sensor network where the information providers do not have the same resources available for the generation of their observations. These different capabilities are reflected in the quality of the provided observations. Hence, the centre must select multiple agents by eliciting their costs and the maximum precisions of their observations, and then ask them to produce these observations. Second, we consider a setting where the information buyer cannot gain any knowledge of the actual outcome beyond what it receives through the agents' reports. Because the centre is not able to evaluate the providers' reported observations through external means, it has to rely solely on the reports it receives: it fuses the reports into one observation, which it then uses to assess the reports of each of the providers. For the initial mechanism and each of the two extensions, we prove their economic properties (i.e. incentive compatibility and individual rationality) and then present empirical results comparing a number of specific scoring rules, including the quadratic, spherical, logarithmic and a parametric family of scoring rules. These results show that although the logarithmic scoring rule minimises the mean and variance of an agent's payment, using it may result in unbounded payments if an agent provides an observation of poor quality. Conversely, the payments of the parametric family exhibit finite bounds and are similar to those of the logarithmic rule for specific values of the parameter. Thus, we show that the parametric scoring rule is the best candidate in our setting. We empirically evaluate both extended mechanisms in the same way, and for the first extension we show that the mechanism describes a family of possible ways to perform the agent selection, one of which dominates all others. Finally, we compare both extensions with the peer prediction mechanism from the literature and show that in all three mechanisms the total expected payment is the same, while for both our mechanisms the variance in the total payment is significantly lower.
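The scoring rules compared above have standard closed forms. The sketch below evaluates the logarithmic, quadratic, and spherical rules for a reported Gaussian predictive density at an observed outcome; the parametric family is omitted because its exact form is not given here, and the payment scaling used in the actual mechanisms is not shown.

```python
# Standard strictly proper scoring rules evaluated for a reported Gaussian
# density N(mu, sigma^2) at outcome x. For a Gaussian, the squared L2 norm
# of the density is 1 / (2 * sigma * sqrt(pi)).

import math

def gaussian_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def log_score(x, mu, sigma):
    """Logarithmic rule ln p(x): unbounded below for poor reports."""
    return math.log(gaussian_pdf(x, mu, sigma))

def quadratic_score(x, mu, sigma):
    """Quadratic rule 2*p(x) - ||p||^2: bounded for a Gaussian report."""
    return 2 * gaussian_pdf(x, mu, sigma) - 1 / (2 * sigma * math.sqrt(math.pi))

def spherical_score(x, mu, sigma):
    """Spherical rule p(x) / ||p||: bounded and non-negative."""
    return gaussian_pdf(x, mu, sigma) / math.sqrt(1 / (2 * sigma * math.sqrt(math.pi)))

# An accurate report (mu = 0) scores far better than a poor one (mu = 5)
# under every rule, but only the log rule's penalty grows without bound -
# consistent with the unbounded-payment concern discussed above.
for rule in (log_score, quadratic_score, spherical_score):
    print(rule.__name__, rule(0.0, 0.0, 1.0), rule(0.0, 5.0, 1.0))
```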
20

Generic security templates for information system security arguments : mapping security arguments within healthcare systems

He, Ying January 2014 (has links)
Industry reports indicate that the number of security incidents occurring in healthcare organisations is increasing. Lessons learned (i.e. the causes of a security incident and the recommendations intended to avoid any recurrence) from those security incidents should ideally inform information security management systems (ISMS). The sharing of lessons learned is an essential activity in the “follow-up” phase of the security incident response lifecycle, one that has long been acknowledged but not given enough attention in academia and industry. This dissertation proposes a novel approach, the Generic Security Template (GST), which aims to feed the lessons learned from real-world security incidents back into the ISMS. It adapts the graphical Goal Structuring Notation (GSN) to present the lessons learned in a structured manner, mapping them to the security requirements of the ISMS. The suitability of the GST has been confirmed by demonstrating that instances of the GST can be produced from real-world security incidents in different countries, based on in-depth analysis of case studies. The usability of the GST has been evaluated using a series of empirical studies. The GST is empirically evaluated in terms of its effectiveness in assisting the communication of the lessons learned from security incidents, as compared to the traditional text-based approach alone. The results show that the GST can help to improve accuracy and reduce the mental effort involved in identifying the lessons learned from security incidents, and the results are statistically significant. The GST is further evaluated to determine whether users can apply it to structure insights derived from a specific security incident. The results show that students with a computer science background can create an instance of the GST. The acceptability of the GST is assessed in a healthcare organisation; strengths and weaknesses are identified, and the GST has been adjusted to fit organisational needs. The GST is then further tested to examine its capability to feed the security lessons back to the ISMS. The results show that, by using the GST, lessons identified from security incidents in one healthcare organisation in a specific country can be transferred to another, and can indeed inform improvements to the ISMS. In summary, the GST provides a unified way to feed the lessons learned back to the ISMS. It fosters an environment where different stakeholders can speak the same language while exchanging the lessons learned from security incidents around the world.
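The GST's goal-structured form can be hinted at with a small data-structure sketch. GSN is a graphical notation, so the node kinds below (Goal, Strategy, Solution) follow standard GSN usage, but the example content is invented and not taken from the thesis's case studies.

```python
# Illustrative GSN-style goal structure mapping a security-incident lesson to
# an ISMS requirement. Node kinds follow standard GSN; content is invented.

from dataclasses import dataclass, field

@dataclass
class GSNNode:
    kind: str                      # "Goal", "Strategy", or "Solution"
    text: str
    children: list["GSNNode"] = field(default_factory=list)

template = GSNNode("Goal", "ISMS access-control requirement is satisfied", [
    GSNNode("Strategy", "Argue over lessons learned from incident X", [
        GSNNode("Goal", "Root cause (shared credentials) is addressed", [
            GSNNode("Solution", "Recommendation: enforce per-user accounts"),
        ]),
    ]),
])

def render(node: GSNNode, depth: int = 0) -> None:
    print("  " * depth + f"[{node.kind}] {node.text}")
    for child in node.children:
        render(child, depth + 1)

render(template)
```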
