101

Engineering enhancements for movie recommender systems

Solanki, Sandeep January 1900 (has links)
Master of Science / Department of Computing and Information Sciences / Doina Caragea / The evolution of the World Wide Web has resulted in extremely large amounts of information. As a consequence, users are faced with the problem of information overload: they have difficulty in identifying and selecting items of interest to them, such as books, movies, blogs, bookmarks, etc. Recommender systems can be used to address the information overload problem by suggesting potentially interesting or useful items to users. Many existing recommender systems rely on collaborative filtering technology. Among other domains, collaborative filtering systems have been widely used in e-commerce, where they have proven to be very successful. However, in recent years the number of users and items available in e-commerce has grown tremendously, challenging recommender systems with scalability issues. To address such issues, we use canopy clustering techniques and the Hadoop MapReduce distributed framework to implement user-based and item-based recommender systems. We evaluate our implementations in the context of movie recommendation. Standard rating prediction schemes generally work by identifying similar users/items. We propose a novel rating prediction scheme that makes use of dissimilar users/items in addition to the similar ones, and experimentally show that the new prediction scheme produces better results than the standard one. Finally, we engineer two new approaches for clustering-based collaborative filtering that can make use of movie synopses and user information. Specifically, in the first approach, we perform user-based clustering using movie synopses together with user demographic data. In the second approach, we perform item-based clustering using movie synopses together with user quotes about movies. Experimental results show that movie synopses and user demographic data can be effectively used to improve the rating predictions made by a recommender system. However, user quotes are too vague and do not produce better predictions.
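To make the dissimilarity idea concrete, here is a minimal user-based prediction sketch in Python. It is not the thesis's Hadoop implementation: it uses dense NumPy arrays and Pearson correlation, and the `use_dissimilar` switch, the neighbourhood size `k` and the zero-means-unrated convention are all illustrative assumptions.

```python
import numpy as np

def pearson_sim(a, b):
    """Pearson correlation over items co-rated by both users (0 = unrated)."""
    mask = (a > 0) & (b > 0)
    if mask.sum() < 2:
        return 0.0
    da, db = a[mask] - a[mask].mean(), b[mask] - b[mask].mean()
    denom = np.sqrt((da ** 2).sum() * (db ** 2).sum())
    return float((da * db).sum() / denom) if denom else 0.0

def predict(R, u, i, k=10, use_dissimilar=True):
    """Predict user u's rating of item i from a ratings matrix R (users x items).

    Standard scheme: average the mean-centred ratings of the k most similar
    users, weighted by similarity. With use_dissimilar=True, strongly
    negatively correlated users are kept as neighbours too, so their
    deviations count with inverted sign. This is one plausible reading of
    the dissimilarity scheme, not necessarily the thesis's exact formula.
    """
    sims = np.array([pearson_sim(R[u], R[v]) if v != u else 0.0
                     for v in range(R.shape[0])])
    rated = R[:, i] > 0
    key = np.abs(sims) if use_dissimilar else sims
    neighbours = [v for v in np.argsort(-key) if rated[v] and sims[v] != 0.0][:k]
    if not neighbours:
        return R[u][R[u] > 0].mean()  # fall back to the user's own mean
    means = np.array([R[v][R[v] > 0].mean() for v in neighbours])
    devs = R[neighbours, i] - means
    w = sims[neighbours]
    return R[u][R[u] > 0].mean() + (w * devs).sum() / np.abs(w).sum()
```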
102

Value creation in a virtual world

Hales, Kieth R Unknown Date (has links)
During the past two decades, increasingly powerful and capable information technologies have made information more accessible and valuable, so that it has become the prime resource for business, ahead of the traditional resources of land, labour and capital. Improved information acquisition, usage and distribution has also driven and enabled globalisation. The emergence of the virtual enterprise (VE) is one consequence of changed market conditions and advanced information and communications technology (ICT). VEs are characterised by various configurations of networks of collaborating partnerships and intensive ICT linkages. As ICT has become more pervasive, businesses have become increasingly reliant on it for their effective operation, so the question for business strategists is now how to create value and sustainable competitive advantage in a virtual world. This thesis offers an answer to that question. It uses rational arguments drawn from a wide variety of research in both the business and ICT disciplines to examine the theoretical foundations of value creation. It explores the development of corporate strategy and value-driven sources of competitive advantage from the viewpoints of industrial organisation (IO), the resource-based view (RBV) of the firm, innovation, transaction cost economics, network theory, and value and supply chains. However, these established strategy theories, whose origins often predate the internet, do not adequately accommodate the expanded roles that information and digital technologies play in creating value in an increasingly digital environment. Conversely, information systems research, which is rich in information technology, struggles to accommodate the notion of value as a legitimate information systems goal. Virtual organisation (VO) is a new strategic paradigm centred on the use of information and ICT to create value. VO is presented as a meta-management strategy that has application in all value-oriented organisations. Within the concept of VO, the business model is an ICT-based construct that bridges and integrates enterprise strategic and operational concerns. The Virtual Value Creation (VVC) framework is an innovative and novel business model that draws on the concept of virtual organisation. The VVC's objective is to provide enterprises with a framework to determine their present and potential capability to use available information to create economic value. It owes its inspiration to Porter and Drucker, both of whom emphasised value creation as the legitimate focus of enterprise activity and the source of sustainable competitive advantage. The VVC framework integrates existing and emerging theories to describe the strategic processes and conditions necessary for the exploitation of information in a commercial setting. The VVC framework represents a novel and valuable tool that enterprises can use to assess their present and potential use of information to create value in a virtual age.
103

Automated analysis of industrial scale security protocols

Plasto, Daniel Unknown Date (has links)
Security protocols provide a communication architecture upon which security-sensitive distributed applications are built. Flaws in security protocols can expose applications to exploitation and manipulation. A number of formal analysis techniques have been applied to security protocols, with the ultimate goal of verifying whether or not a protocol fulfils its stated security requirements. These tools are limited in a number of ways. They are not fully automated and require considerable effort and expertise to operate. The specification languages often lack expressiveness. Furthermore, the model checkers often cannot handle large industrial-scale protocols due to the enormous number of states generated. Current research is addressing many of the limitations of the older tools by using state-of-the-art search optimisation and modelling techniques. This dissertation examines new ways in which industrial protocols can be analysed and presents abstract communication channels: a method for explicitly specifying assumptions made about the medium over which participants communicate.
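As a toy illustration of the abstract-channel idea (not the dissertation's formal notation), the following Python sketch tags each protocol step with channel assumptions and folds a trace into the set of payloads an attacker on the medium could learn. The `Channel` flags and the example trace are hypothetical.

```python
from dataclasses import dataclass
from enum import Flag, auto

class Channel(Flag):
    """Abstract channel assumptions: what the medium itself guarantees."""
    INSECURE = 0           # attacker can read and inject
    CONFIDENTIAL = auto()  # attacker cannot read
    AUTHENTIC = auto()     # attacker cannot inject as someone else
    SECURE = CONFIDENTIAL | AUTHENTIC

@dataclass(frozen=True)
class Send:
    sender: str
    receiver: str
    payload: str
    channel: Channel

def attacker_knowledge(trace, initial=frozenset()):
    """Fold a protocol trace into the set of payloads an attacker on the
    medium learns, given each step's explicit channel assumptions."""
    known = set(initial)
    for step in trace:
        if not (step.channel & Channel.CONFIDENTIAL):
            known.add(step.payload)  # a readable medium leaks the payload
    return known

# Example: a nonce sent in the clear, then a key over a confidential channel.
trace = [
    Send("A", "B", "nonce-NA", Channel.INSECURE),
    Send("B", "A", "session-key-K", Channel.CONFIDENTIAL),
]
leaked = attacker_knowledge(trace)
assert "session-key-K" not in leaked and "nonce-NA" in leaked
```

Making the channel assumptions explicit in the model, rather than leaving them implicit in the analyst's head, is the point: the same protocol verifies or fails depending on what the medium is assumed to guarantee.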
104

A Class of Direct Search Methods for Nonlinear Integer Programming

Sugden, Stephen J Unknown Date (has links)
This work extends recent research in the development of a number of direct search methods for nonlinear integer programming. The various algorithms use an extension of the well-known FORTRAN MINOS code of Murtagh and Saunders as a starting point. MINOS is capable of solving quite large problems in which the objective function is nonlinear and the constraints linear. The original MINOS code has been extended in various ways by Murtagh, Saunders and co-workers since the original 1978 landmark paper. Extensions have dealt with methods to handle both nonlinear constraints (most notably MINOS/AUGMENTED) and integer requirements on a subset of the variables (MINTO). The starting point for the present thesis is the MINTO code of Murtagh. MINTO is a direct descendant of MINOS in that it extends its capabilities to general nonlinear constraints and integer restrictions. The overriding goal of the work described in this thesis is to obtain a good integer-feasible or near-integer-feasible solution to the general NLIP problem while trying to avoid, or at least minimise, the use of the ubiquitous branch-and-bound techniques. In general, we assume a small number of nonlinearities and a small number of integer variables. Some initial ideas motivating the present work are summarised in an invited paper presented by Murtagh at the 1989 CTAC (Computational Techniques and Applications) conference in Brisbane, Australia. The approach discussed there was to start a direct search procedure at the solution of the continuous relaxation of a nonlinear mixed-integer problem by first removing integer variables from the simplex basis, then adjusting integer-infeasible superbasic variables, and finally checking for local optimality by trial unit steps in the integers. This may be followed by a reoptimisation with the latest point as the starting point, but with the integer variables held fixed. We describe ideas for the further development of Murtagh's direct search method. Both the old and new approaches aim to attain an integer-feasible solution from an initially relaxed (continuous) solution. Techniques such as branch-and-bound or Scarf's neighbourhood search [84] may then be used to obtain a locally optimal solution. The present range of direct search methods differs significantly from that described by Murtagh, both in the heuristics used and in the major and minor steps of the procedures. Chapter 5 summarises Murtagh's original approach, while Chapter 6 describes the new methods in detail. A feature of the new approach is that some degree of user interaction (MINTO/INTERACTIVE) has been provided, so that a skilled user can "drive" the solution towards optimality if desired. Alternatively, the code can still be run in "automatic" mode, where one of five available direct search methods may be specified in the customary SPECS file. A selection of nonlinear integer programming problems taken from the literature has been solved and the results are presented in the later chapters. Further, a new communications network topology and allocation model devised by Berry and Sugden has been successfully solved by the direct search methods presented herein. The results are discussed in Chapter 14, where the approach is compared with the branch-and-bound heuristic.
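A schematic Python rendering of the trial-unit-step idea described above may help. It rounds the relaxed solution and then takes unit steps in the integer variables; it is only a sketch under simplifying assumptions and omits the simplex-basis and superbasic-variable machinery that MINTO actually manipulates.

```python
import itertools
import numpy as np

def direct_search(f, x_relaxed, int_idx, feasible, max_iter=100):
    """Schematic direct search toward an integer-feasible local optimum.

    Starting from the continuous-relaxation optimum, round the integer
    variables, then repeatedly take trial unit steps (+1/-1) in each
    integer variable, accepting any feasible step that improves f.
    """
    x = np.asarray(x_relaxed, dtype=float).copy()
    x[int_idx] = np.round(x[int_idx])
    if not feasible(x):
        return None  # rounding failed; a real code would repair infeasibility
    best = f(x)
    for _ in range(max_iter):
        improved = False
        for j, step in itertools.product(int_idx, (1.0, -1.0)):
            trial = x.copy()
            trial[j] += step
            if feasible(trial):
                val = f(trial)
                if val < best:
                    x, best, improved = trial, val, True
        if not improved:
            break  # unit-step local optimum in the integer variables
    return x, best
```

A reoptimisation of the continuous variables with the integers held fixed, as the abstract describes, would then be run from the returned point.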
105

A collaboration framework of selecting software components based on behavioural compatibility with user requirements

Wang, Lei Unknown Date (has links)
Building software systems from previously existing components can save time and effort while increasing productivity. The key to successful Component-Based Development (CBD) is to obtain the required components. However, components obtained from other developers often show different behaviours from those required. Adapting such components into the system being developed thus becomes an extra development and maintenance cost, which often offsets the benefits of CBD. Our research goal is to maximise the possibility of finding components that have the required behaviours, so that the component adaptation cost can be minimised. Imprecise component specifications and user requirements are the main reasons why finding the required components is difficult. Furthermore, there is little support for component users and developers to collaborate and clear up misunderstandings when selecting components, as CBD has two separate development processes for them. In this thesis, we aim at building a framework in which component users and developers can collaborate, with tool support, to select components by exchanging component and requirement specifications. These specifications should be precise enough that behavioural mismatches can be detected. We have defined the Simple Component Interface Language (SCIL) as the communication and specification language to capture component behaviours. A combined SCIL specification of a component and a requirement can be translated into various existing modelling languages, so the properties supported by those languages can be checked with the related model-checking tools. If all the user-required properties are satisfied, then the component is compatible with the user requirement at the behavioural level and can be selected. Based on SCIL, we have developed a prototype component selection system and used it in two case studies: finding a spell-checker component and searching for the components of a generic e-commerce application. The results of the case studies indicate that our approach can indeed find components that have the required behaviours. Compared to the traditional way of searching by keywords, our approach obtains more relevant results, so the cost of component adaptation can be reduced; with a collaborative selection process this cost can be minimised further. However, our approach has not achieved complete automation, owing to modelling inconsistencies between different people, so some manual work to adjust user requirements is needed when using our system. Future work will focus on solving this remaining problem of inconsistent modelling and on providing an automatic trigger to select the proper tools.
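As a rough illustration of behavioural compatibility checking (not SCIL itself, which the abstract leaves unspecified), the sketch below compares bounded-depth traces of two labelled transition systems; the spell-checker actions are hypothetical.

```python
def traces(lts, start, depth):
    """Enumerate action sequences of bounded length from a labelled
    transition system given as {state: [(action, next_state), ...]}."""
    if depth == 0:
        return {()}
    out = {()}
    for action, nxt in lts.get(start, []):
        out |= {(action,) + t for t in traces(lts, nxt, depth - 1)}
    return out

def behaviourally_compatible(component, requirement, start_c, start_r, depth=6):
    """Toy compatibility check: every behaviour the requirement demands
    (up to the given depth) must also be a trace of the component."""
    return traces(requirement, start_r, depth) <= traces(component, start_c, depth)

# Hypothetical spell-checker requirement: load a dictionary, then check words.
requirement = {0: [("load_dict", 1)], 1: [("check_word", 1)]}
component = {0: [("load_dict", 1)], 1: [("check_word", 1), ("suggest", 1)]}
assert behaviourally_compatible(component, requirement, 0, 0)
```

Real model checkers verify far richer properties than bounded trace inclusion, which is the gap that translating SCIL to existing modelling languages is meant to bridge.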
106

Secure information flow for inter-organisational collaborative environments

Bracher, Shane Unknown Date (has links)
Collaborative environments allow users to share and access data across networks spanning multiple administrative domains and beyond organisational boundaries. This poses several security concerns, such as data confidentiality, data privacy and the threat of improper data usage. Traditional access control mechanisms focus on centralised systems and implicitly assume that all resources reside in one domain. This is a critical limitation for inter-organisational collaborative environments, which are characteristically decentralised, distributed and heterogeneous. A consequence of the lack of suitable access control mechanisms for such environments is that data owners relinquish all control over the data they release. In these environments, we can reasonably consider more complex cases where documents have multiple contributors, all with differing access control requirements. Facilitating such cases, while maintaining control over the document's content, its structure and its flow path as it circulates through multiple administrative domains, is a non-trivial issue. This thesis proposes an architecture model for specifying and enforcing access control restrictions on sensitive data that follows a pre-defined inter-organisational workflow. Our approach is to embed access control enforcement within the workflow object (e.g. the circulating document containing sensitive data), as opposed to relying on each administrative domain to enforce the access control policies. The architecture model achieves this using cryptographic access control: a concept that relies on cryptography to enforce access control policies.
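A minimal sketch of the cryptographic access control idea, assuming symmetric per-role keys distributed out of band (the thesis's actual scheme is not detailed in this abstract): each section is encrypted under the keys of its authorised roles, so enforcement travels with the circulating document rather than depending on each domain's access control.

```python
# pip install cryptography
from cryptography.fernet import Fernet, InvalidToken

# Hypothetical roles in an inter-organisational workflow.
role_keys = {role: Fernet.generate_key() for role in ("hr", "finance", "legal")}

def seal_section(text: str, roles: list) -> dict:
    """Encrypt one document section under the key of every authorised role,
    so the access policy is enforced by the ciphertext itself."""
    return {role: Fernet(role_keys[role]).encrypt(text.encode()) for role in roles}

def open_section(sealed: dict, role: str, key: bytes):
    """A reader recovers the section only if their role and key match."""
    token = sealed.get(role)
    if token is None:
        return None  # this role was never granted access
    try:
        return Fernet(key).decrypt(token).decode()
    except InvalidToken:
        return None  # wrong key for this role

salary = seal_section("Salary review: confidential", roles=["hr", "finance"])
assert open_section(salary, "hr", role_keys["hr"]) is not None
assert open_section(salary, "legal", role_keys["legal"]) is None
```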
107

A framework for supporting anonymity in text-based online conversations

Lee, Andrew Unknown Date (has links)
This research has investigated how anonymity has been achieved in text-based online conversations. It has found that anonymity could be attained without any special provision from a conversation system. The absence of face-to-face contact and use of typed remarks are sufficient to create anonymity. Nevertheless, the lack of special provisions can make it difficult for some to use the anonymity they have attained. Preserving such naturally attained anonymity can be equally difficult for users. System administrators will also have trouble controlling anonymity without special provisions. Will deliberate provisions for anonymity remove these problems? The goal of this research is to determine how anonymity in online conversations could and should be supported. An existing conversation system lacking in special support for anonymity has been selected. Every possible change for the benefit of anonymity has been made to this system. The changes that have been made and why they were made are described in this thesis. The impact of those changes is also discussed. The final outcome of this research is a set of guidelines and standards for supporting anonymity in text-based online conversations.
108

Trading in the Australian stockmarket using artificial neural networks

Vanstone, Bruce Unknown Date (has links)
This thesis focuses on training and testing neural networks for use within stockmarket trading systems. It creates and follows a well-defined methodology for developing and benchmarking trading systems which contain neural networks. Four neural networks, and consequently four trading systems, are presented within this thesis. The neural networks are trained using all fundamental or all technical variables, and are trained on different segments of the Australian stockmarket, namely all ordinary shares and the S&P/ASX200 constituents. Three of the four trading systems containing neural networks significantly outperform the respective buy-and-hold returns for their segments of the market, demonstrating that neural networks are suitable for inclusion in stockmarket trading systems. The fourth trading system performs poorly, and a number of reasons are proposed to explain the poor performance. It is significant, however, that the trading system development methodology defined in this thesis clearly exposes the potential failure when testing in-sample, long before the neural network would be used in real trading. Overall, this thesis concludes that neural networks are suitable for use within trading systems, and that trading systems developed using neural networks can be used to provide economically significant profits.
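As a self-contained illustration of benchmarking a network-driven strategy against buy-and-hold (on synthetic random-walk prices, with made-up technical variables, in no way reproducing the thesis's networks or data), a one-hidden-layer network can be trained and evaluated out-of-sample like this:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: daily closes; inputs are simple technical variables
# (returns over 1, 5 and 20 days); target is next-day return direction.
prices = np.cumprod(1 + rng.normal(0.0003, 0.01, 1500))
def tech_features(p, t):
    return np.array([p[t]/p[t-1] - 1, p[t]/p[t-5] - 1, p[t]/p[t-20] - 1])

X = np.array([tech_features(prices, t) for t in range(20, len(prices) - 1)])
y = (np.diff(prices)[20:] > 0).astype(float)

# One-hidden-layer network trained by plain gradient descent (sketch only).
W1, b1 = rng.normal(0, 0.5, (3, 8)), np.zeros(8)
W2, b2 = rng.normal(0, 0.5, 8), 0.0
sigmoid = lambda z: 1 / (1 + np.exp(-z))
split = int(0.7 * len(X))  # train in-sample, evaluate out-of-sample
for _ in range(2000):
    h = np.tanh(X[:split] @ W1 + b1)
    p = sigmoid(h @ W2 + b2)
    err = p - y[:split]                      # gradient of cross-entropy loss
    W2 -= 0.05 * h.T @ err / split
    b2 -= 0.05 * err.mean()
    dh = np.outer(err, W2) * (1 - h**2)      # backprop through tanh layer
    W1 -= 0.05 * X[:split].T @ dh / split
    b1 -= 0.05 * dh.mean(axis=0)

# Trade only when the net is confident; benchmark against buy-and-hold.
h = np.tanh(X[split:] @ W1 + b1)
signal = sigmoid(h @ W2 + b2) > 0.55
rets = np.diff(prices)[20:][split:] / prices[20:-1][split:]
print("strategy:", (1 + rets[signal]).prod(), "buy-and-hold:", (1 + rets).prod())
```

The in-sample/out-of-sample split is the methodological point the abstract stresses: a failing system should be exposed on held-out data before any real trading.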
109

Predicting connectivity in wireless ad hoc networks

Larkin, Henry Unknown Date (has links)
The prevalence of wireless networks is on the increase. Society is becoming increasingly reliant on ubiquitous computing, in which mobile devices play a key role. Wireless networking is a natural solution for providing connectivity to such devices. However, the availability of infrastructure in wireless networks is often limited. Such networks become dependent on wireless ad hoc networking, where nodes communicate and form paths of communication themselves. Wireless ad hoc networks present novel challenges in contrast to fixed-infrastructure networks. The unpredictability of node movement and route availability becomes an issue of significant importance where reliability is desired. To improve reliability in wireless ad hoc networks, predicting future connectivity between mobile devices has been proposed. Predicted connectivity can be employed in a variety of routing protocols to improve route stability and reduce unexpected drop-offs in communication. Previous research in this field has been limited, with few proposals for generating future predictions for mobile nodes. Further work is required to gain a better insight into the effectiveness of various solutions. This thesis proposes such a solution to increase reliability in wireless ad hoc routing. The research presents two novel concepts to achieve this: the Communication Map (CM) and the Future Neighbours Table (FNT). The CM is a signal loss mapping solution. Signal loss maps delineate wireless signal propagation capabilities over physical space. With such a map, connectivity predictions are based on the signal capabilities of the environment in which the mobile nodes are deployed, which significantly improves the accuracy of predictions over previous research; without such a map, connectivity predictions have no knowledge of realistic spatial transmission ranges. The FNT is a solution that provides routing algorithms with a predicted list of future periods of connectivity between all nodes in an established wireless ad hoc network. The availability of this information allows route selection in routing protocols to be greatly improved, benefiting connectivity. The FNT is generated by combining future node position information with the CM to produce predicted signal loss estimates at future intervals. Given an acceptable signal loss value, the FNT is constructed as a list of periods of time in which the signal loss between pairs of nodes will rise above or fall below this acceptable value (predicted connectivity). Future node position information is most readily found in automated networks: robotic nodes commonly operate in settings where future task movement is developed and planned in advance, which is ideal for predicting connectivity. Non-automated prediction is also possible, as there are situations where travel paths are predictable, such as mobile users on a train or driving on a highway. Wherever future node movement is available, predictions of connectivity between nodes are possible.
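The FNT construction can be sketched as follows, with a free-space-style loss function standing in for a measured Communication Map and hypothetical straight-line movement plans; the thresholds and constants are illustrative only.

```python
import numpy as np

def signal_loss(pos_a, pos_b):
    """Toy stand-in for the Communication Map: log-distance path loss.
    A real CM would be measured over the actual deployment terrain."""
    d = max(np.linalg.norm(pos_a - pos_b), 1e-3)
    return 40.0 + 20.0 * np.log10(d)  # dB, hypothetical constants

def future_neighbours(plans, horizon, step=1.0, max_loss=75.0):
    """Build an FNT-style table: for each node pair, the future time
    intervals during which predicted loss stays below the threshold."""
    times = np.arange(0.0, horizon, step)
    table = {}
    nodes = sorted(plans)
    for i, a in enumerate(nodes):
        for b in nodes[i + 1:]:
            connected = [signal_loss(plans[a](t), plans[b](t)) <= max_loss
                         for t in times]
            intervals, start = [], None
            for t, ok in zip(times, connected):
                if ok and start is None:
                    start = t                      # connectivity begins
                elif not ok and start is not None:
                    intervals.append((start, t))   # connectivity ends
                    start = None
            if start is not None:
                intervals.append((start, times[-1] + step))
            table[(a, b)] = intervals
    return table

# Two nodes on planned straight-line paths that approach and then diverge.
plans = {"n1": lambda t: np.array([t, 0.0]),
         "n2": lambda t: np.array([100.0 - t, 0.0])}
print(future_neighbours(plans, horizon=100.0))
```

A routing protocol consulting this table can prefer routes whose links are predicted to stay connected for the duration of a transfer, which is the stability gain the thesis targets.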
