111

A Theory of Shared Understanding for Software Organizations

Aranda Garcia, Jorge 15 February 2011 (has links)
Effective coordination and communication are essential to the success of software organizations, but their study to date has been impaired by theoretical confusion and fragmentation. I articulate a theory that argues that the members of software organizations face a constant struggle to share and negotiate an understanding of their goals, plans, status, and context. This struggle lies at the heart of their coordination and communication problems. The theory proposes an analysis of organizational strategies based on four attributes of interaction that foster the development of shared understanding: synchrony, proximity, proportionality, and maturity. Organizations that have values, structures, and practices which facilitate these qualities find it easier to coordinate and communicate effectively. This argument has serious implications for traditional concepts in our literature. Project lifecycle processes and documentation are poor substitutes for informal but unscalable coordination and communication mechanisms. Practices and tools are valuable to the extent that they enable the development of shared understanding across our criteria. Co-location and group cohesion take advantage of the four criteria and therefore have direct advantages for software teams. Finally, growth is detrimental to the effectiveness of the organization because it hinders the use of small-scale mechanisms and it leads to an undesirable formalization. The theory is supported with empirical evidence collected from five case studies of a wide variety of software organizations, and it has explanatory and predictive power. The thesis links this theory to other current research efforts and shows that it complements and enhances them by providing a more solid theoretical foundation and by reclaiming the relevance of synchronous, proximate, proportionate, and mature interactions in software organizations.
112

Learning Language-vision Correspondences

Jamieson, Michael 15 February 2011 (has links)
Given an unstructured collection of captioned images of cluttered scenes featuring a variety of objects, our goal is to simultaneously learn the names and appearances of the objects. Only a small fraction of local features within any given image are associated with a particular caption word, and captions may contain irrelevant words not associated with any image object. We propose a novel algorithm that uses the repetition of feature neighborhoods across training images and a measure of correspondence with caption words to learn meaningful feature configurations (representing named objects). We also introduce a graph-based appearance model that captures some of the structure of an object by encoding the spatial relationships among the local visual features. In an iterative procedure we use language (the words) to drive a perceptual grouping process that assembles an appearance model for a named object. We also exploit co-occurrences among appearance models to learn hierarchical appearance models. Results of applying our method to three data sets in a variety of conditions demonstrate that from complex, cluttered, real-world scenes with noisy captions, we can learn both the names and appearances of objects, resulting in a set of models invariant to translation, scale, orientation, occlusion, and minor changes in viewpoint or articulation. These named models, in turn, are used to automatically annotate new, uncaptioned images, thereby facilitating keyword-based image retrieval.
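
As a rough, hedged illustration of the correspondence measure described above, the Python sketch below scores caption words against quantized visual feature clusters by how often they co-occur across a captioned collection. The PMI-style score, the (words, clusters) input format, and the min_count threshold are assumptions made for the example, not the actual measure or data structures used in the thesis.

```python
from collections import Counter
from math import log

def correspondence_scores(images, min_count=3):
    """Score how strongly each caption word co-occurs with each visual
    feature cluster across a captioned image collection.

    `images` is a list of (words, clusters) pairs, where `words` is the set
    of caption words and `clusters` the set of quantized feature-cluster ids
    detected in that image. The PMI-style score below is an illustrative
    stand-in for the correspondence measure described in the thesis.
    """
    n = len(images)
    word_count, cluster_count, pair_count = Counter(), Counter(), Counter()
    for words, clusters in images:
        for w in words:
            word_count[w] += 1
        for c in clusters:
            cluster_count[c] += 1
        for w in words:
            for c in clusters:
                pair_count[(w, c)] += 1

    scores = {}
    for (w, c), joint in pair_count.items():
        if joint < min_count:
            continue  # ignore pairs seen too rarely to be reliable
        p_w, p_c, p_wc = word_count[w] / n, cluster_count[c] / n, joint / n
        scores[(w, c)] = log(p_wc / (p_w * p_c))  # pointwise mutual information
    return scores
```

Word-cluster pairs with high scores would be natural candidates for seeding the perceptual grouping step that assembles an appearance model for a named object.
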
113

Querying, Exploring and Mining the Extended Document

Sarkas, Nikolaos 31 August 2011 (has links)
The evolution of the Web into an interactive medium that encourages active user engagement has ignited a huge increase in the amount, complexity and diversity of available textual data. This evolution forces us to re-evaluate our view of documents as simple pieces of text and of document collections as immutable and isolated. Extended documents published in the context of blogs, micro-blogs, on-line social networks, and customer feedback portals can be associated with a wealth of meta-data in addition to their textual component: tags, links, sentiment, entities mentioned in text, etc. Collections of user-generated documents grow, evolve, co-exist and interact: they are dynamic and integrated. These unique characteristics of modern documents and document collections present us with exciting opportunities for improving the way we interact with them. At the same time, this additional complexity, combined with the vast amounts of available textual data, presents us with formidable computational challenges. In this context, we introduce, study and extensively evaluate an array of effective and efficient solutions for querying, exploring and mining extended documents and dynamic, integrated document collections. For collections of socially annotated extended documents, we present an improved probabilistic search and ranking approach based on our growing understanding of the dynamics of the social annotation process. For extended documents, such as blog posts, associated with entities extracted from text and categorical attributes, we enable their interactive exploration through the efficient computation of strong entity associations. Associated entities are computed for all possible attribute-value restrictions of the document collection. For extended documents, such as user reviews, annotated with a numerical rating, we introduce a keyword-query refinement approach. The solution enables the interactive navigation and exploration of large result sets. We extend the skyline query to document streams, such as news articles, associated with categorical attributes and partially ordered domains. The technique incrementally maintains a small set of recent, uniquely interesting extended documents from the stream. Finally, we introduce a solution for the scalable integration of structured data sources into Web search. Queries are analysed in order to determine what structured data, if any, should be used to augment Web search results.
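
The skyline extension mentioned above hinges on a dominance test over categorical attributes with partially ordered domains, maintained incrementally as new documents arrive. The sketch below is a minimal version of that idea under simplifying assumptions: the `orders` encoding of each partial order, the function names, and the omission of recency/expiration handling are all illustrative, not the thesis's actual algorithm.

```python
def dominates(a, b, orders):
    """True if document a dominates document b: a is at least as preferred
    on every attribute and strictly preferred on at least one.

    `orders` maps attribute name -> a 'strictly better than' relation given
    as a dict value -> set of dominated values (a hypothetical, transitively
    closed encoding of the partially ordered categorical domains).
    """
    strictly_better = False
    for attr, better_than in orders.items():
        x, y = a[attr], b[attr]
        if x == y:
            continue
        if y in better_than.get(x, set()):
            strictly_better = True
        else:
            return False  # incomparable or worse on this attribute
    return strictly_better

def update_skyline(skyline, new_doc, orders):
    """Incrementally maintain the skyline when a new document arrives."""
    if any(dominates(s, new_doc, orders) for s in skyline):
        return skyline                      # new document is dominated
    skyline = [s for s in skyline if not dominates(new_doc, s, orders)]
    skyline.append(new_doc)
    return skyline
```

A real streaming variant would also expire old documents so that the skyline remains a small set of recent items, as the abstract describes.
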
114

Online Analysis of High Volume Social Text Streams

Bansal, Nilesh 07 January 2014 (has links)
Social media is one of the most disruptive developments of the past decade. This information revolution has had a fundamental impact on our society. Information dissemination has never been cheaper, and users are increasingly connected with each other. The line between content producers and consumers is blurred, leaving us with an abundance of data produced in real time by users around the world on a multitude of topics. In this thesis we study techniques to aid an analyst in uncovering insights from this new media form, which we model as a high-volume social text stream. The aim is to develop practical algorithms with a focus on scalability, reliable operation, usability, and ease of implementation. Our work lies at the intersection of building large-scale real-world systems and developing the theoretical foundation to support them. We identify three key predicates that enable online methods for the analysis of social data: Persistent Chatter Discovery, to explore topics discussed over a period of time; Cross-referencing Media Sources, to initiate analysis using a document as the query; and Contributor Understanding, to create aggregate expertise and topic summaries of authors contributing online. The thesis defines each of these predicates in detail and covers the proposed techniques, their practical applicability, and detailed experimental results establishing accuracy and scalability for each of the three predicates. We present BlogScope, the core data aggregation and management platform developed as part of the thesis to enable implementation of the key predicates in a real-world setting. The system provides a web-based user interface for searching social media conversations and analyzing the results in a multitude of ways. BlogScope, and its modified versions, index tens to hundreds of billions of text documents while providing interactive query times. Specifically, BlogScope has been crawling 50 million active blogs with 3.25 billion blog posts. The same techniques have also been successfully tested on a Twitter stream, adding thousands of new Tweets every second and archiving over 30 billion documents. The social graph part of our database consists of 26 million Twitter user nodes with 17 billion follower edges. The BlogScope system has been used by over 10,000 unique visitors a day, and the commercial version of the system is used by thousands of enterprise clients globally. As social media continues to evolve at an exponential pace, there is much that still needs to be studied. The thesis concludes by outlining some possible future research directions.
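
As one hedged illustration of the first predicate, Persistent Chatter Discovery, the sketch below flags terms whose share of a time window stays well above their long-run baseline for several consecutive windows. The per-window Counter input, the lift threshold, and the required streak length are illustrative assumptions and are not BlogScope's actual detection method.

```python
from collections import Counter

def persistent_terms(windows, lift=3.0, min_windows=4):
    """Find terms that stay unusually frequent across consecutive windows.

    `windows` is a list of Counters, one per time window, mapping term ->
    count. A term is flagged when its share of a window exceeds `lift` times
    its average share over the whole stream, for at least `min_windows`
    consecutive windows. Thresholds are illustrative, not from the thesis.
    """
    totals = Counter()
    grand_total = 0
    for w in windows:
        totals.update(w)
        grand_total += sum(w.values())
    baseline = {t: c / grand_total for t, c in totals.items()}

    persistent = set()
    streak = Counter()
    for w in windows:
        size = sum(w.values()) or 1
        for t in baseline:
            share = w.get(t, 0) / size
            if share > lift * baseline[t]:
                streak[t] += 1
                if streak[t] >= min_windows:
                    persistent.add(t)
            else:
                streak[t] = 0
    return persistent
```
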
116

Detecting Prominent Patterns of Activity in Social Media

Mathioudakis, Michail 02 April 2014 (has links)
A large part of the Web, today, consists of online platforms that allow their users to generate digital content. They include online social networks, multimedia-sharing websites, blogging platforms, and online discussion boards, to name a few examples. Users of those platforms generate content in the form of digital items (e.g. documents, images, or videos), inspect content generated by others, and, finally, interact with each other (e.g. by commenting on each other's generated items). For the social process of information exchange they enable, such platforms are customarily referred to as "social media". Activity on social media is largely spontaneous and uncoordinated, but it is not random; users choose the discussions they engage in and who they interact with, and their choices and actions reflect what they find important. In this thesis, we define and quantify notions of importance for items, users, and social connections between users, and, based on those definitions, propose efficient algorithms to detect important instances of social media activity. Our description of the algorithms is accompanied by experimental studies that showcase their performance on real datasets in terms of efficiency and effectiveness.
118

Identity and Access Management in Multi-tier Cloud Infrastructure

Faraji, MohammadSadegh 22 November 2013 (has links)
The SAVI IAM is an identity and access management system for the multi-tier cloud infrastructure. The goal of the SAVI IAM is to provide a flexible system that enables applications to adopt the cloud rapidly, rather than concentrating on a specific function such as federation. The SAVI IAM distinguishes itself from previous work in three aspects: comprehensiveness, stability, and technology independence. The SAVI IAM is a comprehensive solution for cloud providers. It uses two fine-grained access control models: constrained Role-based Access Control and Attribute-based Access Control. To address application requirements, it implements delegation and trust mechanisms that enable administrators to delegate their authority to applications temporarily. The SAVI IAM is scalable in the sense that it can handle a huge number of requests by increasing the number of instances. In addition, the middleware component can cache local data to boost the performance of the infrastructure. The SAVI IAM is built on top of OpenStack Keystone v2.0, and supports OpenStack, Amazon EC2, and SAVI APIs.
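
To make the combination of the two access control models concrete, here is a minimal sketch of a decision function that requires both a role-based grant and the satisfaction of attribute rules. The data structures, the rule encoding, and the example policy are hypothetical and do not reflect the SAVI IAM's or Keystone's actual APIs.

```python
def is_authorized(user, action, resource, role_perms, attr_rules):
    """Decide access by combining role-based and attribute-based checks,
    in the spirit of the two models the SAVI IAM combines.

    `role_perms` maps role -> set of (action, resource_type) permissions.
    `attr_rules` is a list of predicates over (user attrs, resource attrs)
    that must all hold for the request to be allowed.
    """
    # RBAC: some role held by the user must grant the action on this type.
    rbac_ok = any(
        (action, resource["type"]) in role_perms.get(role, set())
        for role in user["roles"]
    )
    # ABAC: every attribute rule must be satisfied for this user/resource.
    abac_ok = all(rule(user["attrs"], resource["attrs"]) for rule in attr_rules)
    return rbac_ok and abac_ok

# Hypothetical policy: allow VM deletion only by an admin in the same project.
role_perms = {"admin": {("delete", "vm")}}
attr_rules = [lambda u, r: u.get("project") == r.get("project")]
user = {"roles": ["admin"], "attrs": {"project": "savi-demo"}}
vm = {"type": "vm", "attrs": {"project": "savi-demo"}}
print(is_authorized(user, "delete", vm, role_perms, attr_rules))  # True
```
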
120

Indoor Location-based Recommender System

Lin, Zhongduo 04 December 2013 (has links)
WiFi-based indoor localization is emerging as a new positioning technology. In this work, we present our efforts to find the best recommender system based on indoor location tracks collected from the Bow Valley shopping mall over one week. The time a user spends in a shop is treated as an implicit preference, and different mapping algorithms are proposed to map that time to a more realistic rating value. A new distribution error metric is proposed to examine the mapping algorithms. Eleven different recommender systems are built and evaluated in terms of accuracy and execution time. The Slope-One recommender system with a logarithmic mapping algorithm is finally selected, with a score of 1.292, a distribution error of 0.178, and an execution time of 0.39 seconds for ten runs.
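
As an illustration of the two ingredients named above, a logarithmic time-to-rating mapping and a Slope-One predictor, the sketch below shows one plausible form of each. The dwell-time bounds, the 1-5 rating scale, and the weighted Slope One variant are assumptions made for the example rather than the exact mappings and configuration evaluated in this work.

```python
from math import log
from collections import defaultdict

def time_to_rating(seconds, t_min=60, t_max=3600, r_max=5.0):
    """Map dwell time to a rating on a logarithmic scale (an illustrative
    mapping; the thesis evaluates several such mappings). Times at or below
    `t_min` map to 1.0, times at or above `t_max` map to `r_max`."""
    s = min(max(seconds, t_min), t_max)
    return 1.0 + (r_max - 1.0) * log(s / t_min) / log(t_max / t_min)

def slope_one_predict(ratings, user, target):
    """Weighted Slope One prediction of `user`'s rating for shop `target`.
    `ratings` maps user -> {shop: rating}. Returns None if no co-rated data."""
    diffs, counts = defaultdict(float), defaultdict(int)
    for r in ratings.values():
        if target not in r:
            continue
        for shop, value in r.items():
            if shop != target:
                diffs[shop] += r[target] - value  # rating deviation target - shop
                counts[shop] += 1
    num = den = 0.0
    for shop, value in ratings[user].items():
        if counts[shop]:
            num += (value + diffs[shop] / counts[shop]) * counts[shop]
            den += counts[shop]
    return num / den if den else None
```

One appeal of a logarithmic mapping is that it compresses very long visits, so lingering in a shop for hours does not dominate the rating scale; that is one plausible reason such a mapping could outperform a linear one.
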
