111

Value creation in a virtual world

Hales, Kieth R Unknown Date (has links)
During the past two decades, increasingly powerful and capable information technologies have made information more accessible and valuable, so that it has become the prime resource for business, ahead of the traditional resources of land, labour and capital. Improved information acquisition, usage and distribution have also driven and enabled globalisation. The emergence of the virtual enterprise (VE) is one consequence of changed market conditions and advanced information and communications technology (ICT). VEs are characterised by various configurations of networks of collaborating partnerships and intensive ICT linkages. As ICT has become more pervasive, businesses have become increasingly reliant on it for their effective operation, so the question for business strategists is now how to create value and sustainable competitive advantage in a virtual world. This thesis offers an answer to that question. It uses rational arguments drawn from a wide variety of research in both the business and ICT disciplines to examine the theoretical foundations of value creation. It explores the development of corporate strategy and value-driven sources of competitive advantage from the viewpoints of industrial organisation (IO), the resource-based view (RBV) of the firm, innovation, transaction cost economics, network theory, and value and supply chains. However, these established strategy theories, whose origins often predate the internet, do not adequately accommodate the expanded roles that information and digital technologies play in creating value in an increasingly digital environment. Conversely, Information Systems research, which is rich in information technology, struggles to accommodate the notion of value as a legitimate information systems goal. Virtual organisation (VO) is a new strategic paradigm centred on the use of information and ICT to create value. VO is presented as a meta-management strategy applicable to all value-oriented organisations. Within the concept of VO, the business model is an ICT-based construct that bridges and integrates enterprise strategic and operational concerns. The Virtual Value Creation (VVC) framework is a novel business model that draws on the concept of virtual organisation. The VVC's objective is to provide enterprises with a framework to determine their present and potential capability to use available information to create economic value. It owes its inspiration to Porter and Drucker, both of whom emphasised value creation as the legitimate focus of enterprise activity and the source of sustainable competitive advantage. The VVC framework integrates existing and emerging theories to describe the strategic processes and conditions necessary for the exploitation of information in a commercial setting. The VVC framework thus represents a novel and valuable tool that enterprises can use to assess their present and potential use of information to create value in a virtual age.
112

A Class of Direct Search Methods for Nonlinear Integer Programming

Sugden, Stephen J Unknown Date (has links)
This work extends recent research on the development of a number of direct search methods in nonlinear integer programming. The various algorithms use an extension of the well-known FORTRAN MINOS code of Murtagh and Saunders as a starting point. MINOS is capable of solving quite large problems in which the objective function is nonlinear and the constraints linear. The original MINOS code has been extended in various ways by Murtagh, Saunders and co-workers since the original 1978 landmark paper. Extensions have dealt with methods to handle both nonlinear constraints (most notably MINOS/AUGMENTED) and integer requirements on a subset of the variables (MINTO). The starting point for the present thesis is the MINTO code of Murtagh. MINTO is a direct descendant of MINOS in that it extends the capabilities to general nonlinear constraints and integer restrictions. The overriding goal of the work described in this thesis is to obtain a good integer-feasible or near-integer-feasible solution to the general NLIP problem while trying to avoid, or at least minimise, the use of the ubiquitous branch-and-bound techniques. In general, we assume a small number of nonlinearities and a small number of integer variables. Some initial ideas motivating the present work are summarised in an invited paper presented by Murtagh at the 1989 CTAC (Computational Techniques and Applications) conference in Brisbane, Australia. The approach discussed there was to start a direct search procedure at the solution of the continuous relaxation of a nonlinear mixed-integer problem by first removing integer variables from the simplex basis, then adjusting integer-infeasible superbasic variables, and finally checking for local optimality by trial unit steps in the integers. This may be followed by a reoptimisation with the latest point as the starting point, but with integer variables held fixed. We describe ideas for the further development of Murtagh's direct search method. Both the old and new approaches aim to attain an integer-feasible solution from an initially relaxed (continuous) solution. Techniques such as branch-and-bound or Scarf's neighbourhood search [84] may then be used to obtain a locally optimal solution. The present range of direct search methods differs significantly from that described by Murtagh, both in the heuristics used and in the major and minor steps of the procedures. Chapter 5 summarises Murtagh's original approach, while Chapter 6 describes the new methods in detail. A feature of the new approach is that some degree of user interaction (MINTO/INTERACTIVE) has been provided, so that a skilled user can "drive" the solution towards optimality if desired. Alternatively, the code can still be run in "automatic" mode, where one of five available direct search methods may be specified in the customary SPECS file. A selection of nonlinear integer programming problems taken from the literature has been solved, and the results are presented in the later chapters. Further, a new communications network topology and allocation model devised by Berry and Sugden has been successfully solved by the direct search methods presented herein. The results are discussed in Chapter 14, where the approach is compared with the branch-and-bound heuristic.
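To illustrate the general flavour of such a direct search phase, the sketch below rounds the integer variables of a relaxed (continuous) solution and then probes trial unit steps in each integer variable, accepting improving feasible moves. It is a minimal illustration of the idea only, not Murtagh's MINTO/MINOS procedure: the objective, feasibility test and variable indices are hypothetical inputs, and superbasic-variable handling is ignored.

```python
import itertools

def trial_unit_step_search(f, is_feasible, x_relaxed, int_idx, max_passes=100):
    """Round the integer variables of a relaxed solution, then repeatedly try
    +1/-1 unit steps in each integer variable, keeping improving feasible
    points. A toy stand-in for a direct search after a continuous relaxation."""
    x = list(x_relaxed)
    for i in int_idx:                       # start from the rounded relaxation
        x[i] = round(x[i])
    best = f(x) if is_feasible(x) else float("inf")
    for _ in range(max_passes):
        improved = False
        for i, step in itertools.product(int_idx, (1, -1)):
            trial = list(x)
            trial[i] += step                # trial unit step in one integer variable
            if is_feasible(trial) and f(trial) < best:
                x, best, improved = trial, f(trial), True
        if not improved:                    # locally optimal w.r.t. unit steps
            break
    return x, best

# Example: minimise a separable quadratic with both variables integer and bounded.
f = lambda x: (x[0] - 2.4) ** 2 + (x[1] - 3.6) ** 2
feasible = lambda x: 0 <= x[0] <= 5 and 0 <= x[1] <= 5
x_best, f_best = trial_unit_step_search(f, feasible, [2.4, 3.6], int_idx=[0, 1])
# -> x_best == [2, 4], the best integer point near the relaxed optimum
```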
113

Near Me – a location-aware to-do Android application

Garlapati, Deepti Reddy January 1900 (has links)
Master of Science / Computing and Information Sciences / Daniel A. Andresen / People's purchasing needs grow from day to day, and among the numerous products a person plans to purchase, keeping track of everything that should be bought has become a tedious task. In particular, everyone wishes to be reminded of an item when the location associated with it is nearby. There are many to-do applications in which we can note down our day-to-day needs and things to get, but we may still forget what is on the list when we are actually near the place where an item can be purchased, and so simply miss the chance to buy it. Such situations came up frequently in a small survey I did among my friends, and this difficulty led me to look for a solution. Smartphone usage has become very common, and the open Android platform has helped many people develop their own applications and run them on Android smartphones. I therefore developed an Android application that tracks not only the to-do list of items a person plans to purchase but also the location where each item can be purchased. The app then provides a notification when the user is near the location associated with an item and triggers an alarm so that the user can easily remember what they planned to get at that particular location. The proposed app addresses these problems by providing an intuitive interface in which the user can note down all planned purchases along with the locations of the products, and receive reminders when passing through those locations. The Near Me application thus tracks items to be purchased, or tasks to be done, that are specific to a location. Each to-do item is associated with a date, a location and notes. Storing locations in the application allows timely notifications and alarms based on where the user is and the tasks to be done there. These to-do items can also be synced with an online storage application such as Dropbox.
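The core of such an application is a proximity check between the user's current position and the locations stored with the to-do items. The following platform-agnostic sketch shows that check using the haversine distance; the real application targets Android location services, and the data fields and reminder radius here are illustrative assumptions.

```python
from dataclasses import dataclass
from math import radians, sin, cos, asin, sqrt

@dataclass
class TodoItem:
    title: str
    notes: str
    due_date: str          # e.g. "2013-05-01"
    lat: float             # location where the item can be purchased / task done
    lon: float

def distance_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points (haversine formula)."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(a))

def items_nearby(todo_list, user_lat, user_lon, radius_km=1.0):
    """Return the to-do items whose associated location is within radius_km,
    i.e. the items for which a notification/alarm should be raised."""
    return [item for item in todo_list
            if distance_km(user_lat, user_lon, item.lat, item.lon) <= radius_km]
```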
114

Contract-based verification and test case generation for open systems

Deng, Xianghua January 1900 (has links)
Doctor of Philosophy / Department of Computing and Information Sciences / John M. Hatcliff / Current practices in software development heavily emphasize the development of reusable and modular software, which allows software components to be developed and maintained independently. While a component-oriented approach offers a number of benefits, it presents several quality assurance challenges, including validating the correctness of individual components as well as their integration. Design-by-contract (DBC) offers a promising solution that emphasizes precisely defined and checkable interface specifications for software components. However, existing tools for the DBC paradigm often have some weaknesses: (1) they have difficulty in dealing with dynamically allocated data; (2) specification and checking efforts are disconnected from quality assurance tools; and (3) user feedback is quite poor. We present Kiasan, a framework that synergistically combines a number of automated reasoning techniques, including symbolic execution, model checking, theorem proving, and constraint solving, to support design-by-contract reasoning of object-oriented programs written in languages such as Java and C#. Compared to existing approaches to Java contract verification, Kiasan can check much stronger behavioral properties of object-oriented software, including properties that make extensive use of heap-allocated data, and provides stronger coverage guarantees. In addition, Kiasan naturally generates counterexamples illustrating contract violations, visualizations of code effects, and JUnit test cases that are driven by code and user-supplied specifications. Coverage/cost trade-offs are controlled by user-specified bounds on the length of heap-reference chains and the number of loop iterations. Kiasan's unit test case generation facilities compare very favorably with similar tools. Finally, in contrast to other approaches based on symbolic execution, Kiasan has a rigorous foundation: we have shown that Kiasan is relatively sound and complete and that the test case generation algorithm is sound.
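A toy sketch of the contract-driven, bounded test generation idea is given below. It is written in Python for illustration only: Kiasan itself targets Java and C# and relies on symbolic execution rather than enumeration, and the bound on input size here merely stands in for the user-specified limits on heap-reference chains and loop iterations.

```python
import itertools

def generate_tests(method, precondition, postcondition, value_domain, max_len=3):
    """Enumerate all input lists up to max_len over a small value domain,
    keep those satisfying the precondition, run the method, and check the
    postcondition. Returns (passing, failing) cases; failing cases play the
    role of counterexamples to the contract."""
    passing, failing = [], []
    for n in range(max_len + 1):
        for xs in itertools.product(value_domain, repeat=n):
            xs = list(xs)
            if not precondition(xs):
                continue                    # input outside the contract's domain
            result = method(list(xs))
            (passing if postcondition(xs, result) else failing).append((xs, result))
    return passing, failing

# Example contract: sorting must return the non-decreasing permutation of its input.
pre = lambda xs: True
post = lambda xs, result: result == sorted(xs)
ok, bad = generate_tests(sorted, pre, post, value_domain=(0, 1, 2), max_len=3)
# bad is empty: the built-in sorted satisfies this contract on all bounded inputs
```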
115

Structured interrelations of component architectures

Jung, Georg January 1900 (has links)
Doctor of Philosophy / Department of Computing and Information Sciences / John M. Hatcliff / Software architectures—abstract interrelation models which decompose complex artifacts into modular functional units and specify the connections and relationships among them—have become an important factor in the development and maintenance of large-scale, heterogeneous information and computation systems. In system development, software architecture design has become a main starting point, and throughout the life cycle of a system, conformance to the architecture is important to guarantee the system's integrity and consistency. For an effective use of software architectures in initial development and ongoing maintenance, the interrelation models themselves have to be clear, consistent, well structured, and—in case substantial functionality has to be added, reduced, or changed at any stage of the life cycle—flexible and manipulable. Further, enforcing the conformance of a software artifact to its architecture is a non-trivial task. Implementation units need to be identifiable, and their association to the abstract constructs of the architecture has to be maintained. Finally, since software architectures can be employed at many different levels of abstraction, with some architectures describing systems that span multiple computing platforms, associations have to be flexible and abstractions have to be general enough to capture all parts yet precise enough to be useful. An efficient and widely used way to employ software architecture in practice is middleware-based component architectures. System development within this methodology relies on the presence of a service layer called middleware, which usually resides between the operating system (possibly spanning multiple operating systems on various platforms) and the application described by the architecture. The uniform set of logistic services provided by a middleware allows the communication and context requirements of the functional units, called components, to be expressed in terms of those services, and therefore more briefly and concisely than without such a layer. Also, component development in the middleware context can focus on high-level functionality, since the low-level logistics are provided by the middleware. While type systems have proved effective for enforcing structural constraints in programs and data structures, most architectural modeling frameworks include only weak notions of typing or rely on first-order logic constraint languages instead. Nevertheless, a consistent and rigorous use of typing can seamlessly enforce a wide range of constraints crucial for the structural integrity of architectures and the computation systems specified by them, without the steep learning curve associated with first-order logic. Also, type systems scale better than first-order logic in use, understandability and legibility, as well as in computational complexity. This thesis describes component-oriented architecture modeling with CADENA and introduces the CADENA Architecture Language with Meta-modeling (CALM). CALM uses multi-level type systems to specify complex interaction models and enforce a variety of structural properties and consistency constraints relevant for the development of large-scale component-based systems. Further, CALM generalizes the notion of middleware-based architectures and uniformly captures and maintains complex interrelated architectures integrated on multiple, differing middleware platforms.
CADENA is a robust and extensible tool based on the concepts and notions of CALM that has been used to specify a number of industrial-strength component models and applied in multiple industrial research projects on model-driven development and software product lines.
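A small sketch of the kind of structural constraint a typed architecture model can enforce is shown below. It is an illustration only, written in Python rather than CALM's own notation, and the component and port names are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class PortType:
    name: str                                       # event or interface type carried by a port

@dataclass
class Component:
    name: str
    provides: dict = field(default_factory=dict)    # port name -> PortType
    requires: dict = field(default_factory=dict)    # port name -> PortType

def connect(producer, out_port, consumer, in_port):
    """Allow a connection only if the port types match; this is the kind of
    structural integrity constraint a typed architecture description enforces."""
    p_type = producer.provides[out_port]
    c_type = consumer.requires[in_port]
    if p_type != c_type:
        raise TypeError(f"{producer.name}.{out_port}:{p_type.name} is not "
                        f"compatible with {consumer.name}.{in_port}:{c_type.name}")
    return (producer.name, out_port, consumer.name, in_port)

status = PortType("StatusEvent")
sensor = Component("Sensor", provides={"out": status})
logger = Component("Logger", requires={"in": status})
link = connect(sensor, "out", logger, "in")         # well-typed connection succeeds
```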
116

A model driven data gathering algorithm for Wireless Sensor Networks

Kunnamkumarath, Dhinu Johnson January 1900 (has links)
Master of Science / Department of Computing and Information Sciences / Gurdip Singh / Wireless sensor networks are characterized by severe energy constraints, one-to-many flows and low-rate, redundant data. Most routing algorithms for traditional networks are address-centric, and the ad hoc nature of wireless sensor networks makes them unsuitable for practical applications. Algorithms designed for mobile ad hoc networks are also unsuitable for wireless sensor networks because of the severe energy constraints, which require nodes to operate for months with limited resources, and the low data rates that this constraint implies. This thesis examines a model-driven data gathering algorithm framework for wireless sensor networks. It was designed with the goal of decreasing the overall transmission cost by lowering the number of messages transmitted in the network. A combination of data-centric and address-centric approaches was used as a guideline during the design process. One heuristic is a shortest-path heuristic in which intermediate nodes forward interest messages whenever doing so is of lower cost. Another is a greedy incremental approach that builds a lower-cost tree from a graph with multiple producers and consumers. A cost-division heuristic is used to divide the cost of a shared path among the distinct paths as the path forks in the tree. This thesis analyzes the effect of these heuristics on the performance of the algorithm and how the overall cost is lowered with the addition of each heuristic.
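As a rough illustration of the greedy incremental heuristic mentioned above, the sketch below grows a gathering tree by repeatedly attaching the producer whose cheapest path to the partially built tree is lowest, so that path segments already in the tree are reused. It assumes the network is given as a weighted adjacency dictionary and is not the thesis implementation.

```python
import heapq

def shortest_path(graph, src, targets):
    """Dijkstra from src; return (cost, path) to the nearest node in targets."""
    dist, prev, seen = {src: 0}, {}, set()
    heap = [(0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u in seen:
            continue
        seen.add(u)
        if u in targets:
            path = [u]
            while path[-1] != src:
                path.append(prev[path[-1]])
            return d, path[::-1]
        for v, w in graph[u].items():
            if v not in seen and d + w < dist.get(v, float("inf")):
                dist[v], prev[v] = d + w, u
                heapq.heappush(heap, (dist[v], v))
    return float("inf"), []

def greedy_incremental_tree(graph, sink, producers):
    """Grow a low-cost data-gathering tree: repeatedly connect the producer
    whose cheapest path to the partially built tree is lowest, so that path
    segments already in the tree are reused at no extra cost."""
    tree, total = {sink}, 0
    remaining = set(producers) - tree
    while remaining:
        cost, path = min((shortest_path(graph, p, tree) for p in remaining),
                         key=lambda x: x[0])
        if not path:               # remaining producers are unreachable
            break
        tree.update(path)          # the connecting path now belongs to the tree
        total += cost
        remaining -= tree          # producers reached along the way are done too
    return tree, total
```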
117

Entity extraction, animal disease-related event recognition and classification from web

Volkova, Svitlana January 1900 (has links)
Master of Science / Department of Computing and Information Sciences / William H. Hsu / Global epidemic surveillance is an essential task for national biosecurity management and bioterrorism prevention. The main goal is to protect the public from major health threats. To perform this task effectively, one requires reliable, timely and accurate medical information from a wide range of sources. Towards this goal, we present a framework for epidemiological analytics that can be used to extract and visualize infectious disease outbreaks automatically from a variety of unstructured web sources. More precisely, in this thesis we consider several research tasks, including document relevance classification, entity extraction and animal disease-related event recognition in the veterinary epidemiology domain. First, we crawl web sources and classify collected documents by topical relevance using supervised learning algorithms. Next, we propose a novel approach for automated ontology construction in the veterinary medicine domain. Our approach is based on semantic relationship discovery using syntactic patterns. We then apply our automatically constructed ontology to the domain-specific entity extraction task. Moreover, we compare our ontology-based entity extraction results with an alternative sequence labeling approach. We introduce a sequence labeling method for entity tagging that relies on syntactic feature extraction using a sliding window. Finally, we present our novel sentence-based event recognition approach, which includes three main steps: entity extraction of animal diseases, species, locations, dates and confirmation-status n-grams; event-related sentence classification into two categories, suspected or confirmed; and automated event tuple generation and aggregation. We show that our document relevance classification results, as well as our entity extraction and disease-related event recognition results, are significantly better than the results reported by other animal disease surveillance systems.
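The sliding-window feature extraction used in the sequence labeling step can be illustrated with the sketch below; the feature names, window size and example sentence are assumptions for illustration, not the exact feature set used in the thesis.

```python
def window_features(tokens, i, size=2):
    """Features for token i drawn from a +/- size sliding window, of the kind
    that can be fed to a sequence labeler (e.g. a CRF) for entity tagging."""
    feats = {"bias": 1.0,
             "word": tokens[i].lower(),
             "is_title": tokens[i].istitle(),
             "is_digit": tokens[i].isdigit()}
    for off in range(-size, size + 1):
        if off != 0 and 0 <= i + off < len(tokens):
            feats[f"word[{off:+d}]"] = tokens[i + off].lower()
    return feats

sentence = "Avian influenza H5N1 confirmed in poultry in Cambodia on 3 March".split()
X = [window_features(sentence, i) for i in range(len(sentence))]
# X would be paired with gold labels (disease, species, location, date, O) for training.
```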
118

LDA based approach for predicting friendship links in live journal social network

Parimi, Rohit January 1900 (has links)
Master of Science / Department of Computing and Information Sciences / Doina Caragea / The idea of socializing with other people of different backgrounds and cultures excites web surfers. Today, there are hundreds of social networking sites on the web with millions of users connected by relationships such as "friend", "follow" and "fan", forming a huge graph structure. The amount of data associated with the users of these social networking sites has created opportunities for interesting data mining problems, including friendship link prediction, interest prediction and tag recommendation, among others. In this work, we consider the friendship link prediction problem and study a topic modeling approach to it. Topic models are among the most effective approaches to latent topic analysis and mining of text data. In particular, probabilistic topic models are based on the idea that documents can be seen as mixtures of topics and topics can be seen as mixtures of words. Latent Dirichlet Allocation (LDA) is one such probabilistic model; it is generative in nature and is used for collections of discrete data such as text corpora. For our link prediction problem, users in the dataset are treated as "documents" and their interests as the document contents. The topic probabilities obtained by modeling users and interests with LDA provide an explicit representation for each user. User pairs are treated as examples and are represented by a feature vector constructed from the topic probabilities obtained with LDA. This vector captures only the information contained in the interests expressed by the users. Another important source of information relevant to the link prediction task is the graph structure of the social network. Our assumption is that a user "A" might be a friend of user "B" if (a) users "A" and "B" have common or similar interests, or (b) users "A" and "B" have some common friends. While capturing similarity between interests is taken care of by the topic modeling technique, we use the graph structure to find common friends. In the past, the graph structure underlying the network has proven to be a trustworthy source of information for predicting friendship links. We present a comparison of predictions from feature sets constructed using topic probabilities and the link graph separately, and from a feature set constructed using both topic probabilities and the link graph.
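A minimal sketch of this users-as-documents construction is shown below, using scikit-learn's LDA implementation purely for illustration (the thesis does not necessarily use this library); the example users, interests and common-friends count are hypothetical.

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Each user is a "document" whose words are the user's declared interests.
user_interests = {
    "userA": "hiking photography travel cooking",
    "userB": "photography cameras travel art",
    "userC": "basketball football gaming",
}
users = list(user_interests)

counts = CountVectorizer().fit_transform(user_interests[u] for u in users)
lda = LatentDirichletAllocation(n_components=2, random_state=0)
theta = lda.fit_transform(counts)            # one topic-probability vector per user

def pair_features(i, j, common_friends=0):
    """Feature vector for candidate link (user i, user j): both users' topic
    distributions plus a graph feature such as the number of common friends."""
    return np.concatenate([theta[i], theta[j], [common_friends]])

x_ab = pair_features(users.index("userA"), users.index("userB"), common_friends=3)
# x_ab can be labeled (friends / not friends) and used to train a link classifier.
```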
119

Study on the performance of ontology based approaches to link prediction in social networks as the number of users increases

Phanse, Shruti January 1900 (has links)
Master of Science / Department of Computing and Information Sciences / Doina Caragea / Recent advances in social network applications have resulted in millions of users joining such networks in the last few years. User data collected from social networks can be used for various data mining problems such as interest recommendation, friendship recommendation and many more. Social networks, in general, can be seen as a huge directed network graph representing the users of the network (together with their information, e.g., user interests) and their interactions (also known as friendship links). Previous work [Hsu et al., 2007] on friendship link prediction has shown that graph features contain important predictive information. Furthermore, it has been shown that user interests can be used to improve link predictions if they are organized into an explicit or implicit ontology [Haridas, 2009; Parimi, 2010]. However, these previous studies were performed using a small set of users from the social network LiveJournal. The goal of this work is to study the performance of the ontology-based approach proposed in [Haridas, 2009] as the number of users in the dataset is increased. More precisely, we study the performance of the approach for datasets consisting of 1000, 2000, 3000 and 4000 users. Our results show that the performance generally increases with the number of users. However, the problem quickly becomes intractable from a computation time point of view. As part of our study, we also compare the results obtained using the ontology-based approach [Haridas, 2009] with results obtained with the LDA-based approach of [Parimi, 2010], when such results are available.
120

Tackling the problems of diversity in recommender systems

Karanam, Manikanta Babu January 1900 (has links)
Master of Science / Department of Computing and Information Sciences / William H. Hsu / A recommender system is a computational mechanism for information filtering, where users provide recommendations (in the form of ratings or selected items) as inputs, which the system then aggregates and directs to appropriate recipients. With the advent of web-based media and publicity methods, standardized methods of publicity, sales, production and marketing no longer suffice. In many markets users are given a wide range of products and information from which to choose what they like. To find a way through this, recommender systems are used in a way similar to the live social scenario: just as a user tries to get reviews from friends before opting for a product, the recommender system tries to be a friend who recommends the options. Most recommender systems currently developed are solely accuracy driven, i.e., they aim to reduce the Mean Absolute Error (MAE) between the predictions of the recommender system and the actual ratings of the user. This leads to various problems for recommender systems, such as lack of diversity and lack of freshness. Lack of diversity arises when the recommender system, by focusing overly on accuracy, recommends a set of items in which all of the items are too similar to each other, because they are all predicted to be liked by the user. Lack of freshness likewise arises from an overemphasis on accuracy, which limits the set of items recommended and makes it overly predictable. This thesis work is directed at addressing the issue of diversity by developing an approach in which a threshold of accuracy (in terms of Mean Absolute Error in prediction) is maintained while the set of item recommendations is diversified. For the diversity problem, a combination of attribute-based diversification and user-preference-based diversification is used. This approach is then evaluated using non-classical methods, along with an evaluation of the base recommender algorithm, to show that diversification is indeed possible with a mixture of collaborative and content-based approaches.
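A generic sketch of the kind of accuracy-constrained re-ranking described here is given below. It is a simple rating/dissimilarity trade-off in the style of maximal marginal relevance, not the exact attribute-based and preference-based combination used in the thesis; the threshold, trade-off weight and similarity function are illustrative assumptions.

```python
def diversify(candidates, predicted_rating, similarity, k=10, min_rating=3.5, lam=0.5):
    """Greedy re-ranking: among items whose predicted rating stays above an
    accuracy threshold, repeatedly pick the item with the best trade-off between
    predicted rating and dissimilarity to the items already selected."""
    pool = [i for i in candidates if predicted_rating[i] >= min_rating]
    selected = []
    while pool and len(selected) < k:
        def score(item):
            if not selected:
                return predicted_rating[item]
            max_sim = max(similarity(item, s) for s in selected)
            return lam * predicted_rating[item] + (1 - lam) * (1 - max_sim)
        best = max(pool, key=score)
        selected.append(best)
        pool.remove(best)
    return selected

# Toy usage: m1 and m2 are near-duplicates, m4 falls below the accuracy threshold.
ratings = {"m1": 4.8, "m2": 4.7, "m3": 4.0, "m4": 2.0}
sim = lambda a, b: 0.9 if {a, b} == {"m1", "m2"} else 0.1
print(diversify(ratings, ratings, sim, k=2))   # -> ['m1', 'm3'], skipping the near-duplicate m2
```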
