1 |
Study on the performance of ontology based approaches to link prediction in social networks as the number of users increases. Phanse, Shruti, January 1900.
Master of Science / Department of Computing and Information Sciences / Doina Caragea / Recent advances in social network applications have resulted in millions of users joining such networks in the last few years. User data collected from social networks can be used for various data mining problems, such as interest recommendations, friendship recommendations and many more. Social networks, in general, can be seen as huge directed graphs representing the users of the network (together with their information, e.g., user interests) and their interactions (also known as friendship links). Previous work [Hsu et al., 2007] on friendship link prediction has shown that graph features contain important predictive information. Furthermore, it has been shown that user interests can be used to improve link predictions if they are organized into an explicit or implicit ontology [Haridas, 2009; Parimi, 2010]. However, the previous studies mentioned above were performed using a small set of users from the social network LiveJournal. The goal of this work is to study the performance of the ontology based approach proposed in [Haridas, 2009] as the number of users in the dataset increases. More precisely, we study the performance of the approach on data sets consisting of 1000, 2000, 3000 and 4000 users. Our results show that the performance generally increases with the number of users; however, the problem quickly becomes intractable in terms of computation time. As part of our study, we also compare the results obtained using the ontology based approach [Haridas, 2009] with those obtained using the LDA based approach in [Parimi, 2010], when such results are available.
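To make the feature construction concrete, the sketch below illustrates in Python how a graph feature (common friends between two users) and an ontology based interest similarity (depth of the deepest ancestor shared in an interest hierarchy) could be combined into a feature vector for a candidate friendship link. The toy graph, toy interest ontology and function names are illustrative assumptions, not the exact features or data used in [Haridas, 2009].

```python
# Illustrative sketch: combining a graph feature with an ontology-based
# interest similarity for friendship link prediction (hypothetical data).

# Directed friendship graph: user -> set of users they link to.
friends = {
    "alice": {"bob", "carol"},
    "bob": {"carol"},
    "carol": {"alice"},
    "dave": {"carol"},
}

# Toy interest ontology: child concept -> parent concept (a tree).
parent = {
    "jazz": "music", "rock": "music", "music": "arts",
    "painting": "arts", "arts": "root",
}

user_interests = {"alice": {"jazz"}, "dave": {"rock"}}

def ancestors(concept):
    """Return the concept and all of its ancestors up to the root."""
    chain = [concept]
    while concept in parent:
        concept = parent[concept]
        chain.append(concept)
    return chain

def interest_similarity(u, v):
    """Depth of the deepest ontology concept shared by any pair of interests."""
    best = 0
    for a in user_interests.get(u, ()):
        for b in user_interests.get(v, ()):
            shared = set(ancestors(a)) & set(ancestors(b))
            # Depth = length of the chain up to the root; deeper shared concept = more similar.
            best = max(best, max((len(ancestors(c)) for c in shared), default=0))
    return best

def link_features(u, v):
    """Feature vector for the candidate link u -> v."""
    common = len(friends.get(u, set()) & friends.get(v, set()))
    return [common, interest_similarity(u, v)]

print(link_features("alice", "dave"))  # -> [1, 3]: one common friend; deepest shared concept is 'music'
```

Such feature vectors would then be fed to a classifier that predicts whether the friendship link exists; in this sketch the classifier itself is omitted.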
|
2 |
Real-time rendering of very large 3D scenes using hierarchical mesh simplification. Jönsson, Daniel, January 2009.
Captured and generated 3D data can be so large that they create a problem for today's computers, since they do not fit into main or graphics card memory. Therefore, methods for handling and rendering the data must be developed. This thesis presents a way to pre-process and render out-of-core height map data for real-time use. The pre-processing uses a mesh decimation API called Simplygon, developed by Donya Labs, to optimize the geometry. From the height map a normal map can also be created and used at render time to increase the visual quality. In addition to the 3D data, textures are also supported. To decrease the time to load an object, the normal and texture maps can be compressed on the graphics card prior to rendering. Three different methods for covering gaps are explored, of which one turns out to be insufficient for rendering cylindrical equidistant projected data. At render time, two threads work in parallel: one thread is used to page the data from the hard drive to the main and graphics card memory, and the other thread is responsible for rendering all data. To handle precision errors caused by spatial differences in the data, each object receives a local origin and is then rendered relative to the camera. An atmosphere which handles views from both space and ground is computed on the graphics card. The result is an application, adapted to current graphics card technology, which can page out-of-core data and render a dataset covering the entire earth at a spatial resolution of 500 meters with a realistic atmosphere.
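The two-thread design described above, with one thread paging data from disk while another renders, can be sketched as a simple producer/consumer pattern. The following is a minimal, hypothetical Python illustration of that division of work, not the thesis implementation (which targets graphics card memory); the tile identifiers and loading function are assumptions.

```python
import queue
import threading
import time

# Minimal producer/consumer sketch of the paging/rendering split described above.
# load_tile() and the tile identifiers are hypothetical placeholders.

pending = queue.Queue()   # tiles requested by the renderer
loaded = queue.Queue()    # tiles paged in and ready to upload/draw

def load_tile(tile_id):
    """Placeholder for reading height-map/texture data from disk."""
    time.sleep(0.01)              # simulate disk latency
    return {"id": tile_id, "data": b"\x00" * 1024}

def paging_thread():
    """Background thread: pages requested tiles from disk into memory."""
    while True:
        tile_id = pending.get()
        if tile_id is None:       # sentinel: shut down
            break
        loaded.put(load_tile(tile_id))

def render_loop(frames):
    """Render thread: requests tiles and draws whatever has finished loading."""
    for frame in range(frames):
        pending.put(f"tile_{frame}")   # request data for upcoming frames
        while not loaded.empty():      # draw without blocking on the disk
            tile = loaded.get_nowait()
            print(f"frame {frame}: drawing {tile['id']}")
        time.sleep(0.016)              # stand-in for the rest of the frame work

worker = threading.Thread(target=paging_thread, daemon=True)
worker.start()
render_loop(frames=5)
pending.put(None)   # stop the paging thread
worker.join()
```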
|
3 |
Parallel Algorithm for Reduction of Data Processing Time in Big Data. Silva, Jesús; Hernández Palma, Hugo; Niebles Núñez, William; Ovallos-Gazabon, David; Varela, Noel, 07 January 2020.
Technological advances have made it possible to collect and store large volumes of data over the years. In addition, it is important that today's applications perform well and can analyze these large datasets effectively. It remains a challenge for data mining to keep its algorithms and applications efficient as data size and dimensionality increase [1]. To achieve this goal, many applications rely on parallelism, an area that reduces the execution-time cost of algorithms by taking advantage of the characteristics of current computer architectures to run several processes concurrently [2]. This paper proposes a parallel version of the FuzzyPred algorithm based on the amount of data that can be processed within each of the processing threads, synchronously and independently.
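As a rough illustration of the data-partitioning idea described above, the Python sketch below splits a dataset into chunks and evaluates a fuzzy predicate over each chunk in a separate worker process. The predicate (a simple triangular membership function) and the chunking scheme are assumptions made for the example; they do not reproduce the actual FuzzyPred algorithm.

```python
from concurrent.futures import ProcessPoolExecutor
import random

# Hypothetical fuzzy predicate: triangular membership of a value around a peak.
def membership(value, low=20.0, peak=50.0, high=80.0):
    if value <= low or value >= high:
        return 0.0
    if value <= peak:
        return (value - low) / (peak - low)
    return (high - value) / (high - peak)

def evaluate_chunk(chunk):
    """Evaluate the fuzzy predicate independently on one chunk of records."""
    return [membership(record) for record in chunk]

def parallel_evaluate(data, workers=4):
    """Split the data into equal-sized chunks and process them concurrently."""
    chunk_size = (len(data) + workers - 1) // workers
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        results = pool.map(evaluate_chunk, chunks)
    return [degree for part in results for degree in part]  # preserve input order

if __name__ == "__main__":
    data = [random.uniform(0, 100) for _ in range(100_000)]
    degrees = parallel_evaluate(data)
    print(f"records with membership > 0.8: {sum(d > 0.8 for d in degrees)}")
```

Because each chunk is evaluated independently, the work scales with the number of workers until disk or memory bandwidth becomes the bottleneck, which is the cost-reduction argument made in the abstract.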
|
4 |
Parallel Coordinates Diagram Implementation in 3D Geometry. Suma, Christopher G., January 2018.
No description available.
|
5 |
Large planetary data visualization using ROAM 2.0. Persson, Anders, January 2005.
The problem of estimating an adequate level of detail for an object in a specific view is one of the important problems in computer 3D graphics and is especially important in real-time applications. The well-known continuous level-of-detail technique Real-time Optimally Adapting Meshes (ROAM) has been employed with success for almost 10 years, but due to the rapid development of graphics hardware it has now been found to be inadequate: compared to many other level-of-detail techniques, it cannot benefit from the higher triangle throughput available on today's graphics cards.

This thesis describes the implementation of the new version of ROAM (informally known as ROAM 2.0) for the purpose of massive planetary data visualization. It shows how the problems of the old technique can be bridged so that it adapts to newer graphics cards while still benefiting from the advantages of ROAM. The resulting implementation presented here is specialized for spherical objects and handles both texture and geometry data of arbitrarily large sizes in an efficient way.
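Level-of-detail decisions of the kind ROAM makes are typically driven by projecting a patch's geometric error into screen space and refining when it exceeds a pixel threshold. The short sketch below illustrates that general idea; the formula and thresholds are a common textbook formulation assumed for illustration, not the specific error metric used in this thesis or in ROAM 2.0.

```python
import math

def screen_space_error(geometric_error, distance, screen_height_px, fov_y_rad):
    """Project a world-space geometric error (meters) to an error in pixels."""
    if distance <= 0.0:
        return float("inf")
    # Perspective projection scale factor: pixels per meter at the given distance.
    pixels_per_meter = screen_height_px / (2.0 * distance * math.tan(fov_y_rad / 2.0))
    return geometric_error * pixels_per_meter

def should_split(patch_error_m, distance_m, threshold_px=2.0,
                 screen_height_px=1080, fov_y_rad=math.radians(60)):
    """Refine the patch if its error would be visible (larger than the threshold)."""
    return screen_space_error(patch_error_m, distance_m,
                              screen_height_px, fov_y_rad) > threshold_px

# Example: a patch simplified with 50 m of error, seen from 10 km vs 500 km away.
print(should_split(50.0, 10_000.0))   # True: the error projects to several pixels
print(should_split(50.0, 500_000.0))  # False: the error is sub-pixel at this distance
```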
|
6 |
Contribution à la reconstruction de surfaces complexes à partir d'un grand flot de données non organisées pour la métrologie 3D / Contribution to complex surfaces reconstruction from large and unorganized datasets for 3D metrology. El Hayek, Nadim, 18 December 2014.
Complex surfaces have applications in various fields such as photonics, energy, biomedicine and transport, but they pose real challenges with regard to their specification, manufacturing and measurement, as well as to the evaluation of their form errors. The manufacturing and measurement processes for complex surfaces depend strongly on the specified dimensions, tolerances and shapes. In order to extract useful information from the measured data, an important processing step is required: surface reconstruction, which reconstitutes the geometry and topology of the underlying surface and extracts the information needed for dimensional metrology (dimensional characteristics and evaluation of form errors). For the category of aspherical surfaces, for which a mathematical model is available, the processing of the data, which are not necessarily organized, is done by fitting (associating) the model to the data. The fitting residuals sought in optics are typically on the order of a nanometer. In this context, we propose the L-BFGS optimization algorithm, used here for the first time in metrology, which solves unconstrained non-linear optimization problems in a robust, automatic and fast manner. The L-BFGS method remains efficient for data sets containing several million points. For the category of freeform surfaces, and in particular turbine blades, the manufacturing, measurement and data processing are at an altogether different, sub-micrometric scale. Freeform surfaces are generally not defined by a mathematical model but are represented by parametric models such as B-Splines and/or NURBS. In this context, we present a detailed state of the art and propose a new iterative B-Spline fitting approach based on active contour deformation. The algorithm is free of the problems related to initialization and initial parameterization and therefore constitutes a novelty in this field. We establish a thorough study, showing the advantages and current limitations of this approach on examples of closed curves in 2D, and conclude with perspectives for improving the method and generalizing it to surfaces in 3D.
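As an illustration of the fitting step for aspherical surfaces, the Python sketch below uses SciPy's L-BFGS-B implementation to associate a standard even-asphere sag model with a cloud of measured points by least squares. The model parameters, synthetic data and starting guess are assumptions made for the example; they do not reproduce the algorithmic details or the nanometric performance reported in the thesis.

```python
import numpy as np
from scipy.optimize import minimize

def asphere_sag(r, c, k, a4, a6):
    """Even-asphere sag z(r): conic base plus polynomial deformation terms."""
    return c * r**2 / (1.0 + np.sqrt(1.0 - (1.0 + k) * c**2 * r**2)) + a4 * r**4 + a6 * r**6

def residual_sum(params, r, z):
    """Sum of squared vertical residuals between the model and the data."""
    c, k, a4, a6 = params
    return np.sum((z - asphere_sag(r, c, k, a4, a6)) ** 2)

# Synthetic "measured" points: a known asphere plus measurement noise.
rng = np.random.default_rng(0)
r = rng.uniform(0.0, 10.0, 50_000)                         # radial coordinates (mm)
true = (0.02, -1.2, 1e-6, -1e-9)                           # c, k, A4, A6
z = asphere_sag(r, *true) + rng.normal(0.0, 1e-5, r.size)  # ~10 nm noise

# Fit with L-BFGS-B (limited-memory quasi-Newton), starting from a rough guess.
result = minimize(residual_sum, x0=[0.01, -1.0, 0.0, 0.0],
                  args=(r, z), method="L-BFGS-B")

print("converged:", result.success)
print("estimated c, k, A4, A6:", result.x)
```

The limited-memory property is what makes this family of methods attractive for clouds of several million points: only a handful of gradient vectors is kept in memory rather than a full Hessian.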
|