1

The Design of Web-Oriented Distributed Post-Flight Data Processing Network System

Dang, Huaiyi; Zhang, Junmin; Wang, Jianjun, October 2009
ITC/USA 2009 Conference Proceedings / The Forty-Fifth Annual International Telemetering Conference and Technical Exhibition / October 26-29, 2009 / Riviera Hotel & Convention Center, Las Vegas, Nevada / This paper describes a distributed, network-based system for processing raw flight test data, oriented to the web and to applications. Like a conventional system it consists of database servers, web servers and an NAS storage server, but it adds dedicated distributed task-scheduler servers and calculation servers, and each type of server can be deployed as a team. Through a web browser, with the help of an OCX control, users set up their own processing tasks as needed: choosing the aircraft and flight number, and defining the parameters, flight time segments, extraction rate and so on to be processed. The system carries out the processing using embedded application middleware and the various data-processing modules in the database, coordinated by the scheduler and processing servers. It can serve many users' demands for fast, efficient processing of huge quantities of unstructured raw flight data in a short time, strengthens the management of flight data, and avoids the inefficient, unmanaged copying and distribution of large volumes of raw data.
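Purely as an illustration of the kind of request such a system would carry, here is a minimal C sketch of a task descriptor with the fields the abstract names (aircraft, flight number, time segment, extraction rate, parameters). All field and type names are hypothetical; the paper does not publish its schema.

/* Hypothetical task descriptor for one post-flight processing request.
 * All names are illustrative, not taken from the paper. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

typedef struct {
    char     aircraft_id[16];    /* which plane */
    uint32_t flight_no;          /* which flight */
    double   t_start, t_end;     /* flight time segment, in seconds */
    double   extract_rate_hz;    /* extraction (decimation) rate */
    char     parameters[8][32];  /* parameter names to process */
    uint32_t n_parameters;
} ProcessingTask;

int main(void) {
    ProcessingTask t = { .flight_no = 42, .t_start = 120.0, .t_end = 480.0,
                         .extract_rate_hz = 32.0, .n_parameters = 1 };
    strcpy(t.aircraft_id, "AC-01");
    strcpy(t.parameters[0], "altitude");
    printf("flight %u: %s, %.0f-%.0f s at %.0f Hz\n",
           t.flight_no, t.parameters[0], t.t_start, t.t_end, t.extract_rate_hz);
    return 0;
}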
2

A Case for Protecting Huge Pages from the Kernel

Patel, Naman, January 2016
Modern architectures support multiple page sizes to facilitate applications that use large chunks of contiguous memory, whether for buffer allocation, application-specific memory management, in-memory caching or garbage collection. Most general-purpose processors support larger page sizes: the x86 architecture, for example, supports 2MB and 1GB pages, while the PowerPC architecture supports 64KB, 16MB and 16GB pages. Such larger pages are also known as superpages or huge pages. Huge pages can increase TLB reach significantly, and the Linux kernel can use them transparently to bring down the cost of TLB translations. With Transparent Huge Page (THP) support in the Linux kernel, end users and application developers need not make any change to their applications. Memory fragmentation, one of the classical problems in computing systems for decades, is the key obstacle to allocating huge pages, and ubiquitous huge page support across architectures makes effective fragmentation management even more critical for modern systems. In the absence of huge pages, applications stress the system TLB for virtual-to-physical address translation, which adversely affects performance and energy characteristics in long-running systems. Since most kernel pages tend to be unmovable, fragmentation created by their misplacement is more problematic and nearly impossible to recover through memory compaction. In this work, we explore the physical memory manager of Linux and the interaction of kernel page placement with fragmentation avoidance and recovery mechanisms. Our analysis reveals that a random kernel page layout not only thwarts the progress of memory compaction; it can actually induce more fragmentation in the system. To address this problem, we propose a new allocator which takes special care with the placement of kernel pages. We introduce a new region type representing memory areas that hold kernel as well as user pages, and on top of it a staged allocator that adapts and optimizes kernel page placement as the fragmentation level changes. We then introduce Illuminator, which with zero overhead outperforms the default kernel in huge page allocation success rate and in compaction overhead per huge page. We also show that huge page allocation is not a one-dimensional problem but a two-fold concern: the fragmentation recovery mechanism may interfere with the allocator's page clustering policy and worsen fragmentation. Our results show that with effective kernel page placement the mixed page block count drops by up to 70%, which allows our system to allocate 3x-4x as many huge pages as the default kernel. Using these additional huge pages we show up to 38% improvement in energy consumed and up to 39% reduction in execution time on standard benchmarks.
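As background on the mechanism the thesis builds on, here is a minimal Linux-only C sketch that asks the kernel to back a large, aligned buffer with transparent huge pages via madvise(MADV_HUGEPAGE). Whether the request actually succeeds depends on fragmentation, which is exactly the problem the thesis targets; the buffer size and alignment here are illustrative assumptions.

/* Minimal sketch (Linux-only): request THP backing for a buffer. */
#define _GNU_SOURCE
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>

#define HPAGE (2UL * 1024 * 1024)   /* 2MB huge page size on x86-64 */

int main(void) {
    size_t len = 64 * HPAGE;        /* 128MB, a multiple of the huge page size */
    void *buf = NULL;

    /* Align the region to the huge page size so THP can map it. */
    if (posix_memalign(&buf, HPAGE, len) != 0)
        return 1;

    /* Advise the kernel that this range is eligible for transparent
     * huge pages; success is not guaranteed under fragmentation. */
    if (madvise(buf, len, MADV_HUGEPAGE) != 0)
        perror("madvise(MADV_HUGEPAGE)");

    memset(buf, 0, len);            /* touch the pages to fault them in */
    free(buf);
    return 0;
}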
3

Digitizing the Parthenon using 3D Scanning : Managing Huge Datasets

Lundgren, Therese, January 2004
Digitizing objects and environments from the real world has become an important part of creating realistic computer graphics. Through the use of structured lighting and laser time-of-flight measurements, the capture of geometric models is now a common process. The result is visualizations in which viewers gain new possibilities for both visual and intellectual experiences. This thesis presents the reconstruction of the Parthenon temple and its environment in Athens, Greece using a 3D laser-scanning technique. In order to reconstruct a realistic model using 3D scanning techniques, the acquired datasets have to pass through various processing phases: the data has to be organized, registered and integrated, in addition to pre- and post-processing. This thesis describes the development of a suitable and efficient data-processing pipeline for the given data. The approach differs from previous scanning projects in digitizing this large-scale object at very high resolution; in particular, the issue of managing and processing huge datasets is described. Finally, the processing of the datasets in the different phases and the resulting 3D model of the Parthenon are presented and evaluated.
4

Scalable search engines for the content-based image retrieval task in huge image databases

Gorisse, David, 17 December 2010
With the digital revolution of the last decade, the quantity of digital photos available to everyone has grown faster than the processing capacity of computers. Current search tools were designed to handle small volumes of data; their complexity generally prevents searching large corpora within computation times acceptable to users. In this thesis, we propose solutions for scaling up content-based image search engines. First, we consider automatic search engines that index images as global histograms. We scale these systems by introducing a new index structure adapted to this context, which lets us perform approximate but more efficient nearest-neighbour searches. Second, we turn to more sophisticated engines that improve search quality by working with local features, such as interest points, in a bag-of-features representation. Finally, we propose a strategy for reducing the computational complexity of interactive search engines, which improve their results by using annotations that users supply to the system during search sessions; our strategy quickly selects the most relevant images to annotate by optimizing an active learning method.
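For readers unfamiliar with approximate nearest-neighbour indexing over global histograms, here is an illustrative C sketch of one standard family of such indexes, random-hyperplane locality-sensitive hashing; the thesis's actual index structure is its own contribution and may differ substantially. The histogram dimension and hash length are assumptions.

/* Illustrative sketch: random-hyperplane LSH keys for image histograms.
 * Images whose histograms fall on the same side of every hyperplane share
 * a bucket; a query then scans only its own bucket rather than the whole
 * corpus, trading exactness for speed. */
#include <stdlib.h>

#define DIM   64   /* histogram bins (assumed) */
#define BITS  16   /* hash length (assumed)   */

static double planes[BITS][DIM];   /* one random hyperplane per hash bit */

void lsh_init(unsigned seed) {
    srand(seed);
    for (int b = 0; b < BITS; b++)
        for (int d = 0; d < DIM; d++)
            planes[b][d] = (double)rand() / RAND_MAX - 0.5;
}

unsigned lsh_key(const double *hist) {
    unsigned key = 0;
    for (int b = 0; b < BITS; b++) {
        double dot = 0.0;
        for (int d = 0; d < DIM; d++)
            dot += planes[b][d] * hist[d];
        if (dot > 0.0)
            key |= 1u << b;        /* one bit per hyperplane side */
    }
    return key;                    /* bucket id for this histogram */
}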
5

Rio como fomos: políticas culturais de 2001 a 2012

Carvalho, Bruna Gomes Leite de, 19 April 2013
The present work aims to understand the main aspects of the cultural management implemented by the Municipal Bureau of Culture of the city of Rio de Janeiro (SMC) from 2001 to 2012, reflecting on the role of public power today through the analysis of newspaper articles and interviews with some of the main actors involved in this process. At the same time, the work of the secretaries who led the department during that period was examined, identifying the guidelines of their cultural policies as well as their main projects. The study also sought to understand how the guidelines and goals of the city hall's strategic plans (2004, 2009 and 2012) influenced the kind of cultural policy established, and how that policy helped create the representation of a city that will host a mega-event like the 2016 Olympics. Finally, the similarities between the discourses employed by the SMC's managers, and between their projects and choices, are shown.
6

Distributed Support Vector Machine With Graphics Processing Units

Zhang, Hang, 06 August 2009
Training a Support Vector Machine (SVM) requires the solution of a very large quadratic programming (QP) optimization problem. Sequential Minimal Optimization (SMO) is a decomposition-based algorithm which breaks this large QP problem into a series of smallest possible QP problems, but it still costs O(n²) computation time. In our SVM implementation, we can train with huge data sets in a distributed manner: the dataset is broken into chunks, and the Message Passing Interface (MPI) is used to distribute each chunk to a different machine, with SVM training run within each chunk. In addition, we moved the kernel calculation part of SVM classification to a graphics processing unit (GPU), which has zero scheduling overhead for creating concurrent threads. In this thesis, we take advantage of this GPU architecture to improve the classification performance of SVM.
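A minimal C sketch of the chunking scheme the abstract describes, using the real MPI_Scatter call to ship one chunk of the dataset to each machine; the local SMO solver is reduced to a hypothetical stub, and the dataset sizes are assumptions.

/* Sketch: distribute dataset chunks over MPI ranks, train locally. */
#include <mpi.h>
#include <stdlib.h>

#define N_TOTAL 100000   /* training rows (assumed)    */
#define DIM     32       /* features per row (assumed) */

/* Hypothetical stand-in for a local SMO solver on one chunk. */
static void train_smo_chunk(const float *x, int n, int dim) {
    (void)x; (void)n; (void)dim;   /* the sub-QP solve would go here */
}

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    int n_local = N_TOTAL / size;  /* rows assigned to this machine */
    float *chunk = malloc((size_t)n_local * DIM * sizeof *chunk);
    float *all = NULL;
    if (rank == 0)
        all = malloc((size_t)N_TOTAL * DIM * sizeof *all);
    /* ... rank 0 would load the full dataset into `all` here ... */

    /* One chunk per machine, as in the thesis's MPI decomposition. */
    MPI_Scatter(all, n_local * DIM, MPI_FLOAT,
                chunk, n_local * DIM, MPI_FLOAT, 0, MPI_COMM_WORLD);

    train_smo_chunk(chunk, n_local, DIM);   /* solve the local sub-QP */

    free(chunk);
    if (rank == 0) free(all);
    MPI_Finalize();
    return 0;
}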
7

The cultural and community centre near the Brno dam : the architectural design of buildings for cultural, social and sports events

Šmihula, Michal, January 2010
The design of the cultural centre is situated in Kozia Hôrka (a well-known city swimming area), taking advantage of the natural scenery and calm atmosphere of the place. The body of the reservoir is brought into the composition, and the centre's functions are divided into small parts placed across the Kozia Hôrka area; the orientation of the buildings derives mainly from local natural features. The complex is multifunctional in concept, accommodating several kinds of cultural and sports events, while the main function of the swimming pool is preserved and extended for the greater comfort of the inhabitants. The architecture of the buildings grows from the idea of a leaf floating on the water level and from the body of the reservoir, which the buildings illustrate in stylized form. The design plays with organic and strictly orthogonal volumes, two mutual opposites in interaction. The buildings encroach on the environment, already markedly shaped by man, smoothly and with respect. Simplicity in the materials used (glass, steel, wood) gives the whole solution transparency and purity.
