  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
411

Generalizations and unification of centroid-based clustering methods

Cañas, Daniel Alberto 01 December 2004 (has links)
There are many clustering methods that are referred to as k-means-like. We give the minimal necessary and sufficient components for the mechanism of the k-means (iterative and partitional) clustering method of a finite set of objects, X. Thus k-means is generalized and the methods that mimic k-means are unified. We name these k-center clustering methods. The fundamental mechanism of k-center methods exposes the usual misconceptions about k-means, such as that (a) "distance" satisfies some of the properties of a mathematical metric, (b) there is a need to measure "distance" between objects in X, and (c) the centers of clusters have the same nature as the objects of X. Moreover, k-center methods have a common formula for choosing or calculating the centers of clusters. We characterize the convergent common objective function by expressing it in terms of (a) a distance measure for closeness between center objects and the objects in X and (b) the coherence of clusters. We give a three-object example to demonstrate the components of the formal mechanism of a k-center method. We then give examples of various known methods that belong to the class of k-center methods. We exhibit an extensive and thorough comparison of the qualitative k-modes method and the numerical spherical k-means method. Included are paradigm applications, a matrix environment, an understanding of the duality of dissimilarity and similarity measures, and an understanding of normalized X and the normalized centers of subsets of X.
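The generalized mechanism the abstract describes can be sketched as follows: the only required components are a closeness measure between objects and centers and a rule for computing a center from a cluster. Neither needs to be a metric, and centers need not be objects of X. This is an illustrative sketch, not the thesis's formalization; the names are invented.

```python
def k_center(X, centers, closeness, center_of, iters=20):
    """Generic iterative/partitional clustering of a finite set X."""
    for _ in range(iters):
        # Partition step: assign each object to its closest center.
        clusters = [[] for _ in centers]
        for x in X:
            best = min(range(len(centers)), key=lambda j: closeness(x, centers[j]))
            clusters[best].append(x)
        # Center step: recompute each non-empty cluster's center.
        centers = [center_of(c) if c else centers[j] for j, c in enumerate(clusters)]
    return centers, clusters

# Instantiating with squared difference and the mean recovers classic k-means.
X = [1.0, 2.0, 3.0, 10.0, 11.0, 12.0]
centers, clusters = k_center(
    X, [0.0, 5.0],
    closeness=lambda x, c: (x - c) ** 2,
    center_of=lambda c: sum(c) / len(c),
)
```

Swapping in a different closeness measure and center rule (e.g. Hamming dissimilarity and the mode) yields k-modes under the same mechanism.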
412

Coglaborate - An Environment For Collaborative Cognitive Modeling

Cornel, Reuben Francis 24 November 2009 (has links)
Cognitive scientists who build computational models of their work, as exemplified by the ACT-R and Soar research communities, have limited means of sharing knowledge: annual conferences and workshops, summer schools, and model code distributed via Web sites. The consequence is that results obtained by different groups are scattered across the Internet, making it difficult for researchers to obtain a comprehensive view of cognitive modeling research. The goal of my project is to develop a collaborative modeling environment for cognitive scientists in which they can develop and share models. The current system supports collaboration by providing a structured representation for ACT-R cognitive models using frames. The rationale for providing a structured representation for cognitive models is twofold: it provides a mechanism for sharing models (e.g. via consistent APIs), and it enables the application of analytical techniques to cognitive models. As a proof of concept for the approach, a medium-scale modeling application has been developed, integrating an extension of ACT-R developed elsewhere, to solve synonym crossword puzzles.
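The core idea of a frame-based model representation can be sketched in a few lines: a model component becomes a named slot-filler structure that tools read through a uniform accessor rather than by parsing model code. The slot names below are illustrative assumptions, not the Coglaborate schema.

```python
def make_frame(name, **slots):
    """A frame: a named collection of slot-filler pairs."""
    return {"name": name, "slots": dict(slots)}

# A hypothetical ACT-R production expressed as a frame rather than as code.
production = make_frame(
    "retrieve-synonym",
    condition="goal word =W state find",
    action="retrieval request synonym-of =W",
)

def slot(frame, key):
    """Uniform accessor: sharing and analysis tools read slots, not source."""
    return frame["slots"][key]
```

Because every tool goes through the same accessor, the same frame can feed both a model-sharing API and structural analyses of the model.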
413

Real-Time Image Based Rendering for Stereo Views of Vegetation

Borse, Jitendra Arun 22 November 2002 (has links)
Rendering of detailed vegetation for real-time applications has always been difficult because of the high polygon count in 3D models. Generating correctly warped images for nonplanar projection surfaces often requires even higher degrees of tessellation. Generating left and right eye views for stereo would further reduce the frame rate, since information for one eye's view cannot be used to redraw the vegetation for the other eye's view. We describe an image based rendering approach that is a modification of an algorithm for monoscopic rendering of vegetation proposed by Aleks Jakulin. The Jakulin algorithm pre-renders vegetation models from six viewpoints; rendering from an arbitrary viewpoint is achieved by compositing the nearest two slicings. Slices are alpha blended as the user changes viewing positions. The blending produces visual artifacts that are not distracting in a monoscopic environment but are very distracting in a stereo environment. We have modified the algorithm so that it displays all pre-rendered images simultaneously, and slicings are partitioned and rendered in back-to-front order. This approach improves the quality of the stereo, maintains the basic appearance of the vegetation, and reduces visual artifacts, but it increases rendering time slightly and produces a rendering that is not totally faithful to the original vegetation model.
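The back-to-front compositing the approach relies on is the standard "over" operator applied slice by slice, nearest slice last. A minimal single-pixel sketch (RGBA values in [0, 1]; names illustrative, not from the thesis):

```python
def over(front, back):
    """Composite an RGBA 'front' pixel over a 'back' pixel."""
    fr, fg, fb, fa = front
    br, bg, bb, ba = back
    out_a = fa + ba * (1.0 - fa)
    if out_a == 0.0:
        return (0.0, 0.0, 0.0, 0.0)
    blend = lambda f, b: (f * fa + b * ba * (1.0 - fa)) / out_a
    return (blend(fr, br), blend(fg, bg), blend(fb, bb), out_a)

def composite_back_to_front(pixels):
    """Fold 'over' across slice pixels ordered back (first) to front (last)."""
    result = (0.0, 0.0, 0.0, 0.0)  # start fully transparent
    for pixel in pixels:
        result = over(pixel, result)
    return result
```

Because "over" is order-dependent but deterministic, rendering all slicings in a fixed back-to-front order gives both eyes a consistent result, which is what removes the stereo-breaking artifacts of view-dependent blending.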
414

Utility Guided Pattern Mining

Jagannath, Sandhya 28 November 2003 (has links)
This work is an initial exploration of the use of the decision-theoretic concept of utility to guide pattern mining. We present utility functions, as opposed to thresholds and constraints, as the mechanism to express user preferences, and formulate several pattern mining problems that use utility functions. Utility guided pattern mining provides the twin benefits of capturing user preferences precisely using utility functions and of expressing user focus by choosing an appropriate utility guided pattern mining problem. It addresses the drawbacks of threshold guided pattern mining: the need to specify a threshold and the assumption of a fixed level of interest. We examine the problem of mining patterns with the best utility values in detail. We examine monotonicity properties of utility functions and the composition of utility functions from sub-utility functions as mechanisms to prune the search space. We also present a top-down approach for generating projected databases from FP-Trees, which is an order of magnitude faster than methods proposed in the literature.
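The contrast the abstract draws can be illustrated with a toy example: instead of filtering patterns by a fixed support threshold, a utility function ranks them and the best-utility pattern is returned. The transactions and utility function below are illustrative assumptions, not from the thesis.

```python
from itertools import combinations

transactions = [{"a", "b"}, {"a", "b", "c"}, {"a", "c"}, {"b", "c"}]
items = {"a", "b", "c"}

def support(pattern):
    """Fraction of transactions containing every item in the pattern."""
    return sum(pattern <= t for t in transactions) / len(transactions)

def utility(pattern):
    # Example utility: reward support but also pattern length, a
    # preference that no single support threshold can express.
    return support(pattern) * len(pattern)

# Enumerate small candidate patterns and pick the one with best utility.
patterns = [frozenset(c) for r in (1, 2) for c in combinations(sorted(items), r)]
best = max(patterns, key=utility)
```

Here every singleton has support 0.75 and every pair has support 0.5, so a support threshold would rank the singletons first, while the utility function prefers the longer pattern {a, b}.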
415

Augmentation of Intrusion Detection Systems Through the Use of Bayesian Network Analysis

Williams, Lloyd 03 May 2006 (has links)
The purpose of this research has been to increase the effectiveness of Intrusion Detection Systems in the enforcement of computer security. Current preventative security measures are clearly inadequate, as evidenced by constant examples of compromised computer security seen in the news. Intrusion Detection Systems have been created to respond to the inadequacies of existing preventative security methods. This research presents the two main approaches to Intrusion Detection Systems and the reasons that they too fail to produce adequate security. Promising new methods are attempting to increase the effectiveness of Intrusion Detection Systems, with one of the most interesting approaches being that taken by the TIAA system. The TIAA system uses a method based on employing prerequisites and consequences of security attacks to glean cohesive collections of attack data from large data sets. The reasons why the TIAA approach ultimately fails are discussed, and the possibility of using the TIAA system as a preprocessor for recognizing novel attacks is then presented along with the types of data this approach will produce. In the course of this research the VisualBayes software package was created to make use of the data generated by the TIAA system. VisualBayes is a complete graphical system for the creation, manipulation, and evaluation of Bayesian networks. VisualBayes also uses the Bayesian networks to create a visualization of observations and the probabilities that result from them. This is a new feature that has not been seen in other Bayesian systems up to this point.
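The kind of evaluation such a Bayesian-network tool performs can be sketched on a minimal two-node network (Attack → Alert) using exact inference via Bayes' rule. The conditional probability numbers are illustrative assumptions, not from the thesis.

```python
p_attack = 0.01                      # prior P(Attack = true)
p_alert = {True: 0.90, False: 0.05}  # P(Alert = true | Attack)

def posterior_attack_given_alert():
    """P(Attack | Alert = true) via Bayes' rule."""
    joint_attack = p_attack * p_alert[True]            # P(Attack, Alert)
    joint_no_attack = (1 - p_attack) * p_alert[False]  # P(~Attack, Alert)
    return joint_attack / (joint_attack + joint_no_attack)
```

Even with a 90% detection rate, the low prior and 5% false-alarm rate leave the posterior around 15%, which is exactly the sort of observation-to-probability relationship a visualization of the network makes apparent.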
416

Implementation of DRAND, the Distributed and Scalable TDMA Time Slot Scheduling Algorithm

Min, Jeong Ki 06 December 2005 (has links)
Energy saving is currently the most important subject in wireless sensor network research. TDMA schemes are therefore considered as a solution for better energy savings and system performance, and the TDMA time slot scheduling algorithm is an important issue in running a TDMA scheme. A distributed and scalable approach is required in wireless sensor networks because it is very difficult and inefficient to manage many sensor nodes by a centralized method, given the small memory space and battery capacity of each sensor node deployed in a broad sensing field. We therefore implemented DRAND, a TDMA time slot scheduling algorithm that supports the requirements listed above. Even when a scheme shows good performance in simulation, implementing it as a real system is another problem to solve, because good simulation results cannot guarantee that the algorithm will work properly in the real world due to various unexpected obstacles. By implementing the DRAND scheme as a real system, we can confirm the analysis and simulation results with various real experiments. For the experiments, we use up to 42 MICA2 motes for one-hop and multi-hop tests.
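The property a TDMA slot scheduler such as DRAND must establish can be sketched compactly: no two interfering (one-hop neighboring) nodes may share a time slot. The sequential greedy coloring below stands in for DRAND's distributed lottery rounds, and the topology is an illustrative assumption, not from the thesis.

```python
neighbors = {                    # one-hop interference graph
    "n1": {"n2", "n3"},
    "n2": {"n1", "n3"},
    "n3": {"n1", "n2", "n4"},
    "n4": {"n3"},
}

def assign_slots(graph):
    """Give each node the smallest slot not taken by an assigned neighbor."""
    slots = {}
    for node in sorted(graph):   # the real DRAND decides order via randomized rounds
        taken = {slots[m] for m in graph[node] if m in slots}
        slots[node] = min(s for s in range(len(graph)) if s not in taken)
    return slots

slots = assign_slots(neighbors)
```

Note that n1 and n4 reuse slot 0 because they do not interfere; spatial slot reuse of this kind is what keeps the frame length small as the network scales.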
417

Implementing and Evaluating SCM Algorithms for Rate-Aware Prefetching

Kulkarni, Amit Vasant 06 January 2009 (has links)
File system prefetching has been widely studied and used to hide high latency of disk I/O. However, there are very few algorithms that explicitly take the file access rate or burstiness into account to distribute resources, especially the prefetching memory. In this work we draw parallels between file system prefetching and the field of Supply Chain Management (SCM), particularly Inventory Theory. We further describe two very commonly used algorithms in SCM that directly address access rate and uncertainty. We also implement these prefetching algorithms in the Linux kernel and present the performance results of using these algorithms. Our results show that with these SCM-based algorithms, we can improve the throughput of standard Linux file transfer applications by up to 33% and the throughput of some server workloads (such as Video-on-Demand) by up to 41%.
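The inventory-theoretic idea the abstract points to can be sketched as an order-up-to policy: size each file stream's prefetch window as the mean demand over the "lead time" (disk latency) plus a safety stock proportional to the demand's uncertainty. The service-level z-score and workload numbers below are illustrative assumptions, not the thesis's algorithms.

```python
import math

def prefetch_depth(rate, rate_std, latency, z=1.65):
    """Pages to keep prefetched for one stream (order-up-to level).

    rate      -- mean access rate in pages/second
    rate_std  -- standard deviation of the rate (burstiness)
    latency   -- disk round-trip time in seconds (the 'lead time')
    z         -- safety factor for the desired hit probability
    """
    mean_demand = rate * latency                    # pages consumed per round trip
    safety_stock = z * rate_std * math.sqrt(latency)
    return math.ceil(mean_demand + safety_stock)

# A fast steady stream versus a slower but bursty one:
steady = prefetch_depth(rate=100.0, rate_std=5.0, latency=0.01)
bursty = prefetch_depth(rate=20.0, rate_std=40.0, latency=0.01)
```

The bursty stream ends up with the deeper prefetch window despite its lower mean rate, which is precisely the rate-and-uncertainty-aware allocation that a single fixed readahead size cannot provide.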
418

CYBERINFRASTRUCTURE FOR CONTAMINATION SOURCE CHARACTERIZATION IN WATER DISTRIBUTION SYSTEMS

Sreepathi, Sreerama (Sarat) 19 December 2006 (has links)
Urban water distribution systems (WDSs) are vulnerable to accidental and intentional contamination incidents that could result in adverse human health and safety impacts. This thesis research is part of a larger ongoing cyberinfrastructure project. The overall goal of this project is to develop an adaptive cyberinfrastructure for threat management in urban water distribution systems. The application software core of the cyberinfrastructure consists of various optimization modules and a simulation module. This thesis focuses on the development of specific middleware components of the cyberinfrastructure that enable efficient, seamless execution of the core software components in a grid environment. The components developed in this research include: (i) a coarse-grained parallel wrapper for the simulation module that includes additional features for persistent execution and hooks to communicate with the optimization module and the job submission middleware, (ii) a seamless job submission interface, and (iii) a graphical real-time application monitoring tool. The threat management problem addressed in this research is restricted to contaminant source characterization in water distribution systems.
419

Interest-Matching Comparisons using CP-nets

Wicker, Andrew White 03 January 2007 (has links)
The formation of internet-based social networks has revived research on traditional social network models as well as interest-matching, or match-making, systems. In order to automate or augment the process of interest-matching, we follow the trend of qualitative decision theory by using qualitative preference information to represent a user's interests. In particular, a common form of preference statements for humans is used as the motivating factor in the formalization of ceteris paribus preference semantics. This type of preference information led to the development of conditional preference networks (CP-nets). This thesis presents a method for the comparison of CP-net preference orderings which allows one to determine a shared interest level between agents. Empirical results suggest that a distance measure for preference orderings represented as CP-nets is an effective method for determining shared interest levels. Furthermore, it is shown that differences in CP-net structure correspond to differences in shared interest levels that are consistent with intuition. A generalized Kemeny and Snell axiomatic approach to distance measures on strict partial orderings is used as the foundation on which the interest-matching comparisons are based.
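The Kemeny-style comparison the abstract builds on can be sketched with total orders: the distance between two preference orderings is the number of object pairs on which they disagree, and a smaller distance means a higher shared-interest level. Total orders stand in here for the strict partial orders induced by CP-nets, and the items are invented for illustration.

```python
from itertools import combinations

def kemeny_distance(order_a, order_b):
    """Count pairwise disagreements between two total orderings of the same items."""
    pos_a = {x: i for i, x in enumerate(order_a)}
    pos_b = {x: i for i, x in enumerate(order_b)}
    return sum(
        (pos_a[x] < pos_a[y]) != (pos_b[x] < pos_b[y])
        for x, y in combinations(order_a, 2)
    )

# Most-preferred first: Alice and Bob differ on one swap; Carol is Alice reversed.
alice = ["jazz", "rock", "pop", "folk"]
bob   = ["jazz", "pop", "rock", "folk"]
carol = ["folk", "pop", "rock", "jazz"]
```

By this measure Alice shares far more interest with Bob (distance 1) than with Carol (distance 6, the maximum for four items), matching the intuition the abstract appeals to.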
420

Multi-Dimensional Data Set Visualization in Portable Computing Environments

Romeo, Michael John 16 December 2003 (has links)
This thesis studies the issues involved in the graphical presentation of large, multi-dimensional data sets. In particular, it explores the display of such data sets on low cost, limited capacity portable computing environments (e.g. personal digital assistants, cellular phones, portable gaming devices). After a background discussion of the issues involved in scientific visualization and large multi-dimensional data sets, several portable computing environments are presented, along with graphics implementation packages for those environments. This is followed by a description and presentation of a working implementation for Pocket PC handheld devices, along with a discussion of some extensions and further areas of study.
