
Community detection in complex networks

Bidoni, Zeynab Bahrami, 01 July 2015
This research study has produced advances in the understanding of communities within a complex network. A community in this context is defined as a subgraph with a higher internal density and a lower crossing density with respect to other subgraphs. In this study, a novel and efficient distance-based ranking algorithm called the Correlation Density Rank (CDR) is proposed and utilized for a broad range of applications, such as deriving the community structure and the evolution graph of the organizational structure from a dynamic social network, extracting common members of overlapping communities, performance-based comparison of service providers in a wireless network, and finding optimal reliability-oriented assignments of tasks to processors in heterogeneous distributed computing systems. Experiments conducted on both synthetic and real datasets demonstrate the feasibility and applicability of the framework.
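The abstract's definition of a community (higher internal density, lower crossing density) can be made concrete. The sketch below is a generic density computation on a toy graph, not the author's CDR algorithm, whose details the abstract does not give:

```python
from itertools import combinations

def internal_density(adj, community):
    """Fraction of possible edges inside the community that are present."""
    nodes = list(community)
    possible = len(nodes) * (len(nodes) - 1) / 2
    if possible == 0:
        return 0.0
    internal = sum(1 for u, v in combinations(nodes, 2) if v in adj[u])
    return internal / possible

def crossing_density(adj, community):
    """Fraction of edges incident to the community that leave it."""
    internal = external = 0
    for u in community:
        for v in adj[u]:
            if v in community:
                internal += 1        # each internal edge is counted twice
            else:
                external += 1
    total = internal / 2 + external
    return external / total if total else 0.0

# toy graph: two triangles joined by a single bridge edge (3, 4)
adj = {
    1: {2, 3}, 2: {1, 3}, 3: {1, 2, 4},
    4: {3, 5, 6}, 5: {4, 6}, 6: {4, 5},
}
print(internal_density(adj, {1, 2, 3}))  # 1.0: fully connected internally
print(crossing_density(adj, {1, 2, 3}))  # 0.25: one of four incident edges leaves
```

A good community scores high on the first measure and low on the second; a ranking algorithm like CDR can then order candidate subgraphs by such criteria.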

Early Detection of Online Auction Opportunistic Sellers Through the Use of Negative-Positive Feedback

Reinert, Gregory J., 01 January 2010
Apparently, fraud is a growth industry. Monetary losses from Internet fraud have increased every year since they were first officially reported by the Internet Crime Complaint Center (IC3) in 2000. Prior research studies and third-party reports of fraud show rates substantially higher than eBay's reported negative feedback rate of less than 1%; the conclusion is that most buyers withhold negative feedback. In a forensic case study of a single opportunistic eBay seller, researchers Nikitov and Stone found that buyers sometimes embed negative comments in positive feedback to avoid retaliation from sellers and damage to their own reputations. This category of positive feedback was described as "negative-positive" feedback; an example is "Good product, but slow shipping." This research study investigated using negative-positive feedback as a signature to identify potential opportunistic sellers in an online auction population. As prior researchers working with data extracted from the eBay web site experienced, the volume of data to be analyzed was massive, and the required analysis (judgment of seller behavior and contextual analysis of buyer feedback comments) could not be automated. The traditional method of using multiple dedicated human raters would have taken months of labor at a correspondingly high cost; instead, crowdsourcing via Amazon Mechanical Turk reduced the analysis time to a few days at a fraction of that cost. The results showed that the presence of negative-positive feedback comments is an inter-buyer signal that a seller is behaving fraudulently: sellers with negative-positive feedback were 1.82 times more likely to be fraudulent, and an increasing number of negative-positive comments correlates with an increasing probability of fraud. For every one-unit increase in the number of negative-positive feedback comments, a seller was 4% more likely to be fraudulent.
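The study relied on human raters to judge feedback comments, but the flavor of the signal can be sketched mechanically. The cue phrases below are hypothetical illustrations, not the study's rating criteria:

```python
import re

# hypothetical cue phrases; the actual study used human raters on Amazon
# Mechanical Turk rather than an automated classifier
NEGATIVE_CUES = ["but", "however", "slow", "late", "damaged", "not as described"]

def is_negative_positive(comment, rating):
    """Flag a positive-rated comment that embeds a negative cue phrase."""
    if rating != "positive":
        return False
    text = comment.lower()
    return any(re.search(rf"\b{re.escape(cue)}\b", text) for cue in NEGATIVE_CUES)

print(is_negative_positive("Good product, but slow shipping.", "positive"))  # True
print(is_negative_positive("Fast shipping, great seller!", "positive"))      # False
```

Counting such flags per seller would give the per-comment signal the study's regression quantified (roughly a 4% increase in fraud likelihood per additional flagged comment).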

A Sem-ODB application for the Western Cultures Database

Ghersgorin, Raquel, 21 July 1998
This thesis presents the evolution of the Western Cultures Database. The project began with a database design using semantic modeling and continued with two parallel implementations, one relational and one semantic, until the relational version was set aside because of the advantages of the semantic (Sem-ODB) approach. The semantic implementation produced the Western Cultures Semantic Database Application with a web interface, the main contribution of this thesis. The database is created and populated using Sem-ODB, and the web interface is built using WebRG (a report generator), HTML, JavaScript and JavaChart (applets for graphical representation). The resulting semantic application permits the storage and retrieval of data, the display of reports and the graphical representation of the data through a web interface, all in support of research assertions about the impact of historical figures on Western cultures.

GeoExpert - An Expert System Based Framework for Data Quality in Spatial Databases

Kumar, Aditya, 01 August 2006
The use of very large sets of historical spatial data in the knowledge discovery process has become a common trend, and to obtain better results from this process the data should be of high quality. In this thesis we propose 'GeoExpert', a data quality assessment and cleansing framework for spatial data that integrates the spatial data visualization and analysis capabilities of ArcGIS with the reasoning and inference capabilities of an expert system. We implemented the proposed framework in both stand-alone and web versions using ArcGIS Engine and ArcGIS Server, respectively, and used the JESS expert system shell for the expert system part of GeoExpert. Using an expert system shell separates the application logic from the framework itself, which makes the framework easily updatable and domain-independent. We applied GeoExpert to spatially referenced water quality data.
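The key design point above is that rules live apart from the engine. The sketch below is a Python stand-in for JESS-style condition/action rules applied to water-quality records; the rule names and thresholds are illustrative assumptions, not taken from the thesis:

```python
# hypothetical data-quality rules for water-quality records; in GeoExpert the
# equivalent logic would live in JESS rule files, separate from the application
RULES = [
    ("ph_out_of_range",     lambda r: not (0.0 <= r["ph"] <= 14.0)),
    ("negative_turbidity",  lambda r: r["turbidity"] < 0),
    ("missing_coordinates", lambda r: r["lat"] is None or r["lon"] is None),
]

def audit(record):
    """Return the names of all rules a record violates."""
    return [name for name, broken in RULES if broken(record)]

sample = {"ph": 15.2, "turbidity": 3.1, "lat": 33.7, "lon": -84.4}
print(audit(sample))  # ['ph_out_of_range']
```

Because `RULES` is plain data, swapping in a rule set for a different domain changes nothing in `audit`, which mirrors the domain independence the abstract claims for the expert-shell design.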

Efficient storage and retrieval of georeferenced objects in a semantic database for web-based applications

Davis, Debra Lee, 20 November 2000
Remotely-sensed data are an important resource for environmental, commercial and educational purposes, and their use and availability have increased dramatically in recent years. This usefulness, however, is often overshadowed by the difficulty of working with this type of data: the amount of data available is immense, and storing, searching and retrieving the data of interest is often difficult, time-consuming and inefficient. This is particularly true when such data must be rapidly and continually accessed via the Internet, or combined with other types of remotely-sensed data, such as combining aerial photography with US Census vector data. To address some of these difficulties, this thesis takes a two-fold approach. First, a database schema that can store various types of remotely-sensed data in one database has been designed for a Semantic Object-Oriented Database System (Sem-ODB). This schema includes a linear addressing scheme for remotely-sensed objects that maps an object's 2-dimensional (latitude/longitude) location to a 1-dimensional integer value. The advantages of using this semantic schema with remotely-sensed data are discussed, and the use of the addressing scheme to rapidly search for and retrieve point-based vector data is investigated. In conjunction with this, an algorithm for transforming a spatial range search into a number of linear segments of objects in the 1-dimensional array is investigated. The main issues and the combination of solutions involved are discussed.
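The thesis does not spell out its linear addressing scheme here, but Z-order (Morton) bit interleaving is one standard way to map a 2-dimensional grid cell to a single integer so that nearby points tend to get nearby codes; the sketch below assumes that family of mapping:

```python
def interleave(x, y, bits=16):
    """Interleave the bits of grid coordinates x and y into one Morton code."""
    code = 0
    for i in range(bits):
        code |= ((x >> i) & 1) << (2 * i)       # x bits go to even positions
        code |= ((y >> i) & 1) << (2 * i + 1)   # y bits go to odd positions
    return code

def to_grid(lat, lon, bits=16):
    """Quantize latitude/longitude onto a 2^bits x 2^bits grid."""
    scale = (1 << bits) - 1
    x = round((lon + 180.0) / 360.0 * scale)
    y = round((lat + 90.0) / 180.0 * scale)
    return x, y

x, y = to_grid(25.76, -80.19)   # roughly Miami
print(interleave(x, y))
```

Under such a mapping, a rectangular range query decomposes into a set of runs of consecutive 1-dimensional codes, which matches the abstract's algorithm for turning a range search into "linear segments of objects" in the array.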

Efficient Data Structures for Text Processing Applications

Abedin, Paniz, 01 December 2021
This thesis is devoted to designing and analyzing efficient text indexing data structures and associated algorithms for processing text data. The general problem is to preprocess a given text or a collection of texts into a space-efficient index to quickly answer various queries on this data. Basic queries such as counting/reporting a given pattern's occurrences as substrings of the original text are useful in modeling critical bioinformatics applications. This line of research has witnessed many breakthroughs, such as the suffix trees, suffix arrays, FM-index, etc. In this work, we revisit the following problems: (1) the Heaviest Induced Ancestors problem, (2) the Range Longest Common Prefix problem, (3) the Range Shortest Unique Substrings problem, and (4) the Non-Overlapping Indexing problem. For the first problem, we present two new space-time trade-offs that improve the space, query time, or both of the existing solutions by roughly a logarithmic factor. For the second problem, our solution takes linear space, which improves the previous result by a logarithmic factor. The techniques developed are then extended to obtain an efficient solution for our third problem, which is newly formulated. Finally, we present a new framework that yields efficient solutions for the last problem in both cache-aware and cache-oblivious models.
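The basic counting query the abstract mentions can be answered with a suffix array and two binary searches, since all suffixes starting with the pattern are contiguous in sorted order. A minimal sketch (naive construction, fine for illustration; real indexes build the array in linear time):

```python
from bisect import bisect_left, bisect_right

def suffix_array(text):
    """Naive O(n^2 log n) construction; a sketch, not a production index."""
    return sorted(range(len(text)), key=lambda i: text[i:])

def count_occurrences(text, sa, pattern):
    """Count occurrences of pattern via two binary searches over suffixes."""
    # truncating each suffix to the pattern length makes equality comparisons
    # line up exactly with "suffix starts with pattern"
    prefixes = [text[i:i + len(pattern)] for i in sa]
    return bisect_right(prefixes, pattern) - bisect_left(prefixes, pattern)

text = "banana"
sa = suffix_array(text)
print(count_occurrences(text, sa, "ana"))  # 2
```

Reporting (rather than counting) just returns the `sa` entries in that same range; the problems the thesis studies layer extra structure (ranges, uniqueness, non-overlap constraints) on top of this basic mechanism.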

On the security of NoSQL cloud database services

Ahmadian, Mohammad, 01 January 2017
Processing the vast volume of data generated by web, mobile and Internet-enabled devices necessitates a scalable and flexible data management system. Database-as-a-Service (DBaaS) is a new cloud computing paradigm promising cost-effective, scalable, fully-managed database functionality that meets the requirements of online data processing. Although DBaaS offers many benefits, it also introduces new threats and vulnerabilities. While many traditional data processing threats remain, DBaaS introduces new challenges such as confidentiality violation and information leakage in the presence of privileged malicious insiders, adding a new dimension to data security. We address the problem of building a secure DBaaS for a public cloud infrastructure in which the Cloud Service Provider (CSP) is not completely trusted by the data owner. We present a high-level description of several architectures that combine modern cryptographic primitives to achieve this goal. A novel searchable security scheme is proposed to enable secure query processing in the presence of a malicious cloud insider without disclosing sensitive information. This dissertation proposes a holistic database security scheme comprising data confidentiality and information leakage prevention. The main contributions of our work are: (i) a searchable security scheme for the non-relational databases of cloud DBaaS; and (ii) leakage minimization in the untrusted cloud. Analysis of experiments employing a set of established cryptographic techniques to protect databases and minimize information leakage shows that the performance of the proposed solution is bounded by communication cost rather than by cryptographic computational effort.
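To illustrate the idea of querying without disclosing plaintext to the provider, here is a minimal deterministic-token sketch of searchable encryption. This is a generic textbook construction, not the dissertation's scheme; real schemes add randomization and explicit leakage controls of exactly the kind the abstract discusses:

```python
import hashlib
import hmac
import os

# the client holds the key; the server only ever sees opaque tokens
KEY = os.urandom(32)

def token(keyword):
    """Deterministic search token: HMAC of the keyword under the client key."""
    return hmac.new(KEY, keyword.encode(), hashlib.sha256).hexdigest()

# the client indexes a document by uploading tokens instead of keywords
index = {token(w) for w in ["invoice", "acme", "2017"]}

# at query time the client sends a token; the server matches opaque strings
print(token("acme") in index)      # True
print(token("payroll") in index)   # False
```

The server can answer equality queries yet never learns the keywords themselves; what it does learn (which documents match which repeated token) is the access-pattern leakage that leakage-minimization work aims to bound.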

Towards More Efficient Collaborative Distributed Data Analysis and Learning

Liu, Zixia, 01 January 2022
The modern information era gives rise to the persistent generation of large amounts of data with rapid speed and broad geographical distribution, and obtaining knowledge and understanding through analysis of and learning from such data is invaluable. Such data analytical tasks commonly share several features: the data can be large-scale and geographically distributed; computing demand can be enormous; tasks can be time-critical; some data can be private; participants can have heterogeneous capabilities and non-IID data; and multiple data analytical tasks can be submitted simultaneously. These features pose challenges to contemporary computing infrastructure and learning models. In view of this, we develop techniques to tackle these challenges together, towards more efficient collaborative distributed data analysis and learning. We propose a hierarchical framework that supports data analytics on multiple Apache Spark clusters. We propose reinforcement-learning-based resource management approaches to improve overall efficiency and reduce deadline violations when scheduling general and time-critical data analytical workflows across computing resources. We establish a new hybrid framework for efficient privacy-preserving federated learning, and build on it an algorithm that improves asynchronous federated learning among heterogeneous participants with non-IID data. We also propose an asynchronous stochastic gradient descent algorithm, with convergence analysis, for general distributed learning among heterogeneous participants with non-IID data. Experiments have shown the efficacy of our proposed approaches.
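The core difficulty in asynchronous SGD is that workers push gradients computed on stale parameters. A common remedy, sketched below on a toy one-dimensional problem, is to damp each update by its staleness. This is an illustrative assumption about the general technique, not the dissertation's algorithm:

```python
import random

# toy staleness-aware asynchronous SGD; the damping rule lr/(1+staleness)
# is one standard heuristic, used here purely for illustration
random.seed(0)

def grad(w, target):
    # gradient of the loss (w - target)^2 / 2
    return w - target

w, lr, target = 0.0, 0.1, 3.0
for step in range(200):
    staleness = random.randint(0, 4)   # ticks since the worker read w
    g = grad(w, target)                # worker pushes a (possibly stale) gradient
    w -= lr / (1 + staleness) * g      # damp staler contributions more
print(round(w, 3))  # converges toward target = 3.0
```

With heterogeneous participants, slow workers naturally report higher staleness, so their outdated gradients perturb the model less; convergence analyses of such schemes bound the error as a function of maximum or expected staleness.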

Performance analysis of a distributed file system

Mukhopadhyay, Meenakshi, 01 January 1990
An important design goal of a distributed file system, a component of many distributed systems, is to provide UNIX file access semantics, e.g., the result of any write system call is visible to all processes as soon as the call completes. In a distributed environment, these semantics are difficult to implement because processes on different machines do not share a kernel cache or data structures, and strong data consistency guarantees can be provided only at the expense of performance. This work investigates the time costs paid by AFS 3.0, which uses a callback mechanism to provide consistency guarantees, and those paid by AFS 4.0, which uses typed tokens for synchronization. AFS 3.0 provides moderately strong consistency guarantees, but they are not UNIX-like because data are written back to the server only after a file is closed. AFS 4.0 writes data back to the server whenever other clients want to access it, the effect being like UNIX file access semantics. Also, AFS 3.0 does not guarantee synchronization of multiple writers, whereas AFS 4.0 does.
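The callback mechanism can be sketched as a promise from the server to notify caching clients before data changes, so reads can safely hit the local cache. The toy model below captures that protocol shape only; it is an illustration, not AFS's actual implementation:

```python
# toy sketch of AFS-3-style callbacks: a fetch registers a callback promise,
# and a store breaks every outstanding promise before the data changes
class Server:
    def __init__(self):
        self.data = {}
        self.callbacks = {}          # path -> set of clients holding a promise

    def fetch(self, client, path):
        self.callbacks.setdefault(path, set()).add(client)
        return self.data.get(path)

    def store(self, path, value):
        for c in self.callbacks.pop(path, set()):
            c.invalidate(path)       # break callbacks before updating
        self.data[path] = value

class Client:
    def __init__(self, server):
        self.server, self.cache = server, {}

    def read(self, path):
        if path not in self.cache:   # miss, or callback was broken
            self.cache[path] = self.server.fetch(self, path)
        return self.cache[path]

    def invalidate(self, path):
        self.cache.pop(path, None)

srv = Server()
srv.data["/f"] = "v1"
a = Client(srv)
print(a.read("/f"))    # 'v1' fetched from server, callback registered
srv.store("/f", "v2")  # server breaks a's callback before writing
print(a.read("/f"))    # 'v2' refetched after the invalidation
```

The performance question the thesis studies follows directly: every read between invalidations is a cheap local hit, but each store pays a round of callback-break messages, and AFS 4.0's typed tokens refine this by distinguishing read and write intentions.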

Video Game Development Strategies for Creating Successful Cognitively Challenging Games

Williams, Walter K., 01 January 2018
The video game industry is a global multibillion-dollar industry with millions of players. The process of developing video games is essential for the continued growth of the industry, and developers need to employ effective strategies that will help them create successful games. The purpose of this exploratory qualitative single-case study was to investigate the design strategies of video game developers who have successfully created games that are challenging, entertaining, and successful. The technology acceptance model served as the conceptual framework. The population for this study was the members of a video game development team at a small, successful video game development company in North Carolina. Data collection included interviews with 7 video game developers and analysis of 7 organizational documents. Member checking was used to increase the validity of the findings. Through triangulation, 4 major themes were identified: the video game designer has a significant impact on the development process; the development process for successful video games follows iterative agile programming methods; programming to challenge cognition is not a target goal for developers; and receiving feedback is essential to the process. The findings may help future video game developers and organizations develop strategies for creating successful games that entertain and challenge players while ensuring the viability of the organization. The findings may also benefit society by indicating where attention should be directed concerning the impact of video games on player behavior.
