51

A Zoomable 3D User Interface using Uniform Grids and Scene Graphs

Rinne, Vidar January 2011 (has links)
Zoomable user interfaces (ZUIs) have been studied for a long time and many applications are built upon them. Most applications, however, use only two dimensions to express their content. This report presents a solution that uses all three dimensions, with the base features built as a framework using uniform grids and scene graphs as primary data structures. The purpose of these data structures is to improve performance while maintaining flexibility when creating and handling three-dimensional objects. A 3D-ZUI can represent the view of the world and its objects in a more lifelike manner, and it is possible to interact with the objects much as in the real world. By developing a prototype framework as well as some example applications, the usefulness of 3D-ZUIs is illustrated. Since the framework relies on abstraction and object-oriented principles, it is easy to maintain and extend as needed. The currently implemented data structures are well motivated for a large-scale 3D-ZUI in terms of accelerated collision detection and picking, and they also provide a flexible base when developing applications. The performance of the framework could be further improved, for example by supporting different types of culling and levels of detail.
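The uniform-grid idea behind the framework's accelerated collision detection and picking can be sketched in a few lines. This is a minimal illustration under assumed names (the `UniformGrid` class and its methods are invented for the example), not the thesis's actual implementation.

```python
from collections import defaultdict

class UniformGrid:
    """Hypothetical uniform grid: buckets object ids by spatial cell so that
    collision/picking queries only need to test objects in nearby cells
    instead of every object in the scene."""

    def __init__(self, cell_size=1.0):
        self.cell_size = cell_size
        self.cells = defaultdict(set)

    def _cell(self, x, y, z):
        # Map a world-space point to integer cell coordinates.
        s = self.cell_size
        return (int(x // s), int(y // s), int(z // s))

    def insert(self, obj_id, x, y, z):
        self.cells[self._cell(x, y, z)].add(obj_id)

    def query(self, x, y, z):
        # Candidate set for collision/picking at a point: objects
        # registered in the same cell.
        return set(self.cells[self._cell(x, y, z)])
```

A full implementation would also insert objects into every cell their bounds overlap and search neighboring cells, but the constant-time cell lookup is what makes the structure attractive for large scenes.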
52

Enhancing security in distributed systems with trusted computing hardware

Reid, Jason Frederick January 2007 (has links)
The need to increase the hostile attack resilience of distributed and internetworked computer systems is critical and pressing. This thesis contributes to concrete improvements in distributed systems trustworthiness through an enhanced understanding of a technical approach known as trusted computing hardware. Because of its physical and logical protection features, trusted computing hardware can reliably enforce a security policy in a threat model where the authorised user is untrusted or when the device is placed in a hostile environment. We present a critical analysis of vulnerabilities in current systems, and argue that current industry-driven trusted computing initiatives will fail in efforts to retrofit security into inherently flawed operating system designs, since there is no substitute for a sound protection architecture grounded in hardware-enforced domain isolation. In doing so we identify the limitations of hardware-based approaches. We argue that the current emphasis of these programs does not give sufficient weight to the role that operating system security plays in overall system security. New processor features that provide hardware support for virtualisation will contribute more to practical security improvement because they will allow multiple operating systems to concurrently share the same processor. New operating systems that implement a sound protection architecture will thus be able to be introduced to support applications with stringent security requirements. These can coexist alongside inherently less secure mainstream operating systems, allowing a gradual migration to less vulnerable alternatives. We examine the effectiveness of the ITSEC and Common Criteria evaluation and certification schemes as a basis for establishing assurance in trusted computing hardware.
Based on a survey of smart card certifications, we contend that the practice of artificially limiting the scope of an evaluation in order to gain a higher assurance rating is quite common. Due to a general lack of understanding in the marketplace as to how the schemes work, high evaluation assurance levels are confused with a general notion of 'high security strength'. Vendors invest little effort in correcting the misconception since they benefit from it and this has arguably undermined the value of the whole certification process. We contribute practical techniques for securing personal trusted hardware devices against a type of attack known as a relay attack. Our method is based on a novel application of a phenomenon known as side channel leakage, heretofore considered exclusively as a security vulnerability. We exploit the low latency of side channel information transfer to deliver a communication channel with timing resolution that is fine enough to detect sophisticated relay attacks. We avoid the cost and complexity associated with alternative communication techniques suggested in previous proposals. We also propose the first terrorist attack resistant distance bounding protocol that is efficient enough to be implemented on resource constrained devices. We propose a design for a privacy sensitive electronic cash scheme that leverages the confidentiality and integrity protection features of trusted computing hardware. We specify the command set and message structures and implement these in a prototype that uses Dallas Semiconductor iButtons. We consider the access control requirements for a national scale electronic health records system of the type that Australia is currently developing. We argue that an access control model capable of supporting explicit denial of privileges is required to ensure that consumers maintain their right to grant or withhold consent to disclosure of their sensitive health information in an electronic system. 
Finding this feature absent in standard role-based access control models, we propose a modification to role-based access control that supports policy constructs of this type. Explicit denial is difficult to enforce in a large scale system without an active central authority but centralisation impacts negatively on system scalability. We show how the unique properties of trusted computing hardware can address this problem. We outline a conceptual architecture for an electronic health records access control system that leverages hardware level CPU virtualisation, trusted platform modules, personal cryptographic tokens and secure coprocessors to implement role based cryptographic access control. We argue that the design delivers important scalability benefits because it enables access control decisions to be made and enforced locally on a user's computing platform in a reliable way.
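The proposed modification to role-based access control — explicit denial that overrides any role-derived grant, so a consumer can withhold consent — can be illustrated with a small sketch. The class, its policy-store layout, and all names below are hypothetical, invented for the example rather than taken from the thesis's specification.

```python
class DenyCapableRBAC:
    """Hypothetical sketch of RBAC extended with explicit denials.
    An explicit (user, resource, action) denial always overrides
    any permission the user would otherwise derive from roles."""

    def __init__(self):
        self.role_perms = {}   # role -> set of (resource, action)
        self.user_roles = {}   # user -> set of roles
        self.denials = set()   # explicit (user, resource, action) denials

    def grant(self, role, resource, action):
        self.role_perms.setdefault(role, set()).add((resource, action))

    def assign(self, user, role):
        self.user_roles.setdefault(user, set()).add(role)

    def deny(self, user, resource, action):
        self.denials.add((user, resource, action))

    def check(self, user, resource, action):
        # Deny-overrides semantics: an explicit denial wins outright.
        if (user, resource, action) in self.denials:
            return False
        return any((resource, action) in self.role_perms.get(r, set())
                   for r in self.user_roles.get(user, ()))
```

The interesting engineering question the thesis addresses is not this local check but how to enforce such denials reliably at scale without a central authority, which is where the trusted hardware comes in.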
53

Interpretação de dados de GPR com base na hierarquização de superfícies limitantes e na adaptação de critérios sismoestratigráficos / Interpretation of GPR data based on the hierarchy of bounding surfaces and on adapted seismostratigraphic criteria

Andrade, Peryclys Raynyere de Oliveira 17 June 2005 (has links)
Due to its high resolution, Ground Penetrating Radar (GPR) has been used to image subsurface sedimentary deposits. Because GPR and seismic methods share some principles of image construction, the classic seismostratigraphic interpretation method has also been applied to GPR data. Notwithstanding some advances in a few particular contexts, the adaptations of seismostratigraphic tools and concepts from seismic to GPR remain unsuitable, because the meanings given to the termination criteria in seismic stratigraphy do not adequately represent the geologic record at the GPR scale. Essentially, the open question lies in proposing an interpretation method for GPR data that allows not only relating sedimentary product and process at the GPR scale, but also identifying or proposing depositional environments and correlating these results with the well-known cornerstones of Sequence Stratigraphy. The goal of this dissertation is to propose an interpretation methodology for GPR data able to perform this task, at least for siliciclastic deposits. To that end, the proposed GPR interpretation method is based both on seismostratigraphic concepts and on the bounding-surface hierarchy of Miall (1988). As a consequence of this joint use, the results of GPR interpretation can be associated with sedimentary facies in a genetic context, so that it is possible to: (i) individualize radar facies and correlate them to sedimentary facies by using depositional models; (ii) characterize a given depositional system; and (iii) determine its stratigraphic framework, highlighting how it evolved through geologic time.
To illustrate its use, the proposed methodology was applied to a GPR data set from the Galos area, part of the Galinhos spit, located in Rio Grande do Norte state, northeastern Brazil. This spit presents high lateral variation of sedimentary facies, containing in its sedimentary record 4th- to 6th-order cycles caused by high-frequency sea-level oscillation. The interpretation proceeded through the following phases: (i) identification of a vertical facies succession; (ii) characterization of radar facies and their associated sedimentary products; (iii) recognition of the associated sedimentary processes in a genetic context; and finally (iv) proposal of an evolutionary model for the Galinhos spit. This model proposes that the Galinhos spit is a barrier island constituted, from base to top, of the following sedimentary facies: tidal channel, tidal flat, shore, and aeolian (dune) facies. The tidal channel facies, at the base, consists of lateral accretion bars and channel-fill deposits. This basal facies is laterally truncated by the tidal flat facies. In the foreshore zone, the tidal flat facies is covered by the shore facies, recording a marine transgression. Finally, at the top of the stratigraphic column, aeolian dunes were deposited due to subaerial exposure caused by a marine regression.
54

Vizualizace a uživatelské rozhraní pro řídicí systém divadelního jeviště / Visualization and User Interface for Theatre Stage Control System

Kobza, Lukáš January 2013 (has links)
This thesis deals with 3D modelling and visualization. It also surveys the technical equipment of a theatre stage and the control systems for this machinery, with emphasis on the user interface and its interaction with the staff. The main topic is then an investigation of how 3D visualization technology can be used in theatre stage control systems, followed by the design and implementation of a 3D visualization application for a theatre stage, aimed at increasing the clarity and safety of operating the theatre control system.
55

[en] ENABLING AUTONOMOUS DATA ANNOTATION: A HUMAN-IN-THE-LOOP REINFORCEMENT LEARNING APPROACH / [pt] HABILITANDO ANOTAÇÕES DE DADOS AUTÔNOMOS: UMA ABORDAGEM DE APRENDIZADO POR REFORÇO COM HUMANO NO LOOP

LEONARDO CARDIA DA CRUZ 10 November 2022 (has links)
[pt] As técnicas de aprendizado profundo têm mostrado contribuições significativas em vários campos, incluindo a análise de imagens. A grande maioria dos trabalhos em visão computacional concentra-se em propor e aplicar novos modelos e algoritmos de aprendizado de máquina. Para tarefas de aprendizado supervisionado, o desempenho dessas técnicas depende de uma grande quantidade de dados de treinamento, bem como de dados rotulados. No entanto, a rotulagem é um processo caro e demorado. Uma recente área de exploração são as reduções dos esforços na preparação de dados, deixando-os sem inconsistências, ruídos, para que os modelos atuais possam obter um maior desempenho. Esse novo campo de estudo é chamado de Data-Centric IA. Apresentamos uma nova abordagem baseada em Deep Reinforcement Learning (DRL), cujo trabalho é voltado para a preparação de um conjunto de dados em problemas de detecção de objetos, onde as anotações de caixas delimitadoras são feitas de modo autônomo e econômico. Nossa abordagem consiste na criação de uma metodologia para treinamento de um agente virtual a fim de rotular automaticamente os dados, a partir do auxílio humano como professor desse agente. Implementamos o algoritmo Deep Q-Network para criar o agente virtual e desenvolvemos uma abordagem de aconselhamento para facilitar a comunicação do humano professor com o agente virtual estudante. Para completar nossa implementação, utilizamos o método de aprendizado ativo para selecionar casos onde o agente possui uma maior incerteza, necessitando da intervenção humana no processo de anotação durante o treinamento. Nossa abordagem foi avaliada e comparada com outros métodos de aprendizado por reforço e interação humano-computador, em diversos conjuntos de dados, onde o agente virtual precisou criar novas anotações na forma de caixas delimitadoras. 
Os resultados mostram que o emprego da nossa metodologia impacta positivamente para obtenção de novas anotações a partir de um conjunto de dados com rótulos escassos, superando métodos existentes. Desse modo, apresentamos a contribuição no campo de Data-Centric IA, com o desenvolvimento de uma metodologia de ensino para criação de uma abordagem autônoma com aconselhamento humano para criar anotações econômicas a partir de anotações escassas. / [en] Deep learning techniques have shown significant contributions in various fields, including image analysis. The vast majority of work in computer vision focuses on proposing and applying new machine learning models and algorithms. For supervised learning tasks, the performance of these techniques depends on a large amount of training data and labeled data. However, labeling is an expensive and time-consuming process. A recent area of exploration is the reduction of efforts in data preparation, leaving it without inconsistencies and noise so that current models can obtain greater performance. This new field of study is called Data-Centric AI. We present a new approach based on Deep Reinforcement Learning (DRL), whose work is focused on preparing a dataset, in object detection problems where the bounding box annotations are done autonomously and economically. Our approach consists of creating a methodology for training a virtual agent in order to automatically label the data, using human assistance as a teacher of this agent. We implemented the Deep Q-Network algorithm to create the virtual agent and developed a counseling approach to facilitate the communication of the human teacher with the virtual agent student. We used the active learning method to select cases where the agent has more significant uncertainty, requiring human intervention in the annotation process during training to complete our implementation. 
Our approach was evaluated and compared with other reinforcement learning methods and human-computer interaction in different datasets, where the virtual agent had to create new annotations in the form of bounding boxes. The results show that the use of our methodology has a positive impact on obtaining new annotations from a dataset with scarce labels, surpassing existing methods. In this way, we present the contribution in the field of Data-Centric AI, with the development of a teaching methodology to create an autonomous approach with human advice to create economic annotations from scarce annotations.
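The active-learning step — routing the agent's most uncertain cases to the human teacher — can be sketched with a margin-based criterion. This is an illustrative assumption (the thesis does not spell out its exact uncertainty measure here), and the function name is invented for the example.

```python
import numpy as np

def select_uncertain(q_values, k=2):
    """Hypothetical active-learning step: rank samples by the margin
    between the agent's two highest Q-values. A small margin means the
    agent cannot clearly prefer one action, i.e. high uncertainty, so
    those samples are routed to the human teacher for annotation."""
    q = np.asarray(q_values, dtype=float)
    q_sorted = np.sort(q, axis=1)
    margins = q_sorted[:, -1] - q_sorted[:, -2]   # best minus second-best
    return np.argsort(margins)[:k]                # k most uncertain indices
```

For example, with per-sample Q-values `[[0.9, 0.1], [0.51, 0.49], [0.6, 0.4]]`, the sample with margin 0.02 is selected first, since it is the one the agent is least sure about.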
56

Recourse policies in the vehicle routing problem with stochastic demands

Salavati-Khoshghalb, Majid 09 1900 (has links)
No description available.
57

Proposta metodológica para o imageamento, caracterização, parametrização e geração de modelos virtuais de afloramentos / A methodological proposal for imaging, characterizing, parameterizing and generating virtual outcrop models

Souza, Anderson de Medeiros 31 January 2013 (has links)
Petróleo Brasileiro SA - PETROBRAS / The aim of this work was to describe the methodological procedures required to perform 3D digital imaging of the external and internal geometry of outcrops analogous to reservoirs, and to build Virtual Outcrop Models (VOMs). The external geometry was imaged using a laser scanner, geodesic GPS and a total station, while the internal geometry was imaged with GPR (Ground Penetrating Radar). The resulting VOMs were then enriched with geological data and with gamma-ray and permeability logs. To exemplify the proposed methodological procedures, two sets of outcrops located on the eastern edge of the Parnaíba Basin were selected. In the first area, aeolian deposits of the Piauí Formation (Neocarboniferous) and tidal flat deposits of the Pedra de Fogo Formation (Permian) crop out in a large road cut between Floriano and Teresina (Piauí). The second area, in the Sete Cidades National Park, also in Piauí, comprises rocks of the Cabeças Formation deposited in fluvial-deltaic systems during the Late Devonian. From the enriched VOMs it was possible to identify lines, surfaces and 3D geometries, and therefore to quantify the geometries of interest. Among the parameterizations obtained, the most relevant is a table of thickness and width values measured in channel and lobe deposits at the Paredão and Biblioteca outcrops; this table can be used as input for stochastic reservoir simulation.
An example of the direct use of the interpreted radargrams was the identification of bounding surfaces in the aeolian deposits of the Piauí Formation. Although radargrams supply only two-dimensional data, lines acquired along a grid added a third dimension to the imaging of the internal geometries in all the studied outcrops. Finally, a new methodology is proposed that combines the advantages of digital imaging with the laser scanner (precision, accuracy and acquisition speed) and the total station (precision) with the classical use of digital photomosaics.
58

Sécurisation d'un lien radio UWB-IR / Security of an UWB-IR Link

Benfarah, Ahmed 10 July 2013 (has links)
Due to the open and shared nature of the wireless medium, wireless communications are vulnerable to serious security threats. In this PhD work, I focused on two classes of attacks: relay attacks and denial of service by jamming. UWB-IR physical layer technology has developed considerably over the last decade, making it a promising candidate for short-range wireless networks. My main goal was to exploit UWB-IR physical layer characteristics in order to reinforce the security of wireless communications. By simply relaying signals, an adversary can defeat cryptographic authentication protocols. The first countermeasure proposed to thwart these relay attacks was the distance bounding protocol, which combines a cryptographic authentication side with a distance-checking side. In this context, I propose two new distance bounding protocols that significantly improve the security of existing ones by means of UWB-IR physical layer parameters: the first, STHCP (Secret Time-Hopping Code Protocol), is based on secret time-hopping codes, while the second, SMCP (Secret Mapping Code Protocol), is based on secret mapping codes. Security analysis and comparison to the state of the art highlight various figures of merit of the proposal. Jamming consists in intentionally emitting a signal over the channel while a communication is taking place, and is a major problem for the security of wireless communications. My contributions on jamming are threefold. First, I determined worst-case Gaussian jammer parameters (central frequency and bandwidth) against a UWB-IR communication employing PPM modulation and a non-coherent receiver, taking the signal-to-jamming ratio at the receiver output as the metric for jammer optimization. Second, I propose a new jamming model by analogy with attacks against ciphering algorithms.
The new model distinguishes various jamming scenarios ranging from the best case to the worst case. Third, I propose a modification of the UWB-IR physical layer that restricts any jamming problem to the most favorable scenario. The modification is based on a cryptographic modulation driven by a stream cipher; the resulting radio has the advantage of combining resistance to jamming with protection from eavesdropping. Finally, I addressed the problem of embedding security in an existing UWB-IR network. Security embedding consists in adding security features directly at the physical layer and transmitting them concurrently with the data, under a compatibility constraint with existing receivers in the network. I propose two new embedding techniques for the UWB-IR physical layer, providing an authentication service, which rely on superposing a pulse orthogonal to the original pulse in shape or in position. Performance analysis reveals that both embedding techniques satisfy all system design constraints.
59

Automatic Data Allocation, Buffer Management And Data Movement For Multi-GPU Machines

Ramashekar, Thejas 10 1900 (has links) (PDF)
Multi-GPU machines are increasingly used in high performance computing, both as standalone workstations running computations on medium to large data sizes (tens of gigabytes) and as nodes in CPU-multi-GPU clusters handling very large data sizes (hundreds of gigabytes to a few terabytes). Each GPU in such a machine has its own memory and does not share an address space with the host CPU or with other GPUs. Hence, applications utilizing multiple GPUs have to manually allocate and manage data on each GPU. A significant body of scientific applications that utilize multi-GPU machines contain computations inside affine loop nests, i.e., loop nests with affine bounds and affine array access functions. These include stencils, linear algebra kernels, dynamic programming codes and data mining applications. Data allocation, buffer management, and coherency handling are critical steps that must be performed to run affine applications on multi-GPU machines. Existing works that propose to automate these steps have limitations and inefficiencies in terms of allocation sizes, exploiting reuse, transfer costs and scalability. An automatic multi-GPU memory manager that can overcome these limitations and enable applications to achieve scalable performance is highly desirable. One technique that has been used in certain memory management contexts in the literature is that of bounding boxes. The bounding box of an array, for a given tile, is the smallest hyper-rectangle that encapsulates all the array elements accessed by that tile. In this thesis, we exploit the potential of bounding boxes for memory management far beyond their current usage in the literature. We propose a scalable and fully automatic data allocation and buffer management scheme for affine loop nests on multi-GPU machines, called the Bounding Box based Memory Manager (BBMM). BBMM is a compiler-assisted runtime memory manager.
At compile time, it uses static analysis techniques to identify the set of bounding boxes accessed by a computation tile. At run time, it uses bounding box set operations such as union, intersection, difference, and subset/superset tests to compute a set of disjoint bounding boxes from the set identified at compile time. It also exploits the architectural capability of GPUs to perform fast transfers of rectangular (strided) regions of memory, and hence performs all data transfers in terms of bounding boxes. BBMM uses these techniques to automatically allocate and manage the data required by applications (suitably tiled and parallelized for GPUs). This allows it to (1) allocate only as much data (or close to it) as is required by the computations running on each GPU, (2) efficiently track buffer allocations and hence maximize data reuse across tiles and minimize data transfer overhead, and (3) as a result, enable applications to maximize utilization of the combined memory of a multi-GPU machine. BBMM can work with any choice of parallelizing transformations, computation placement, and scheduling schemes, whether static or dynamic. Experiments on a system with four GPUs and various scientific programs showed that BBMM reduces data allocations on each GPU by up to 75% compared to current allocation schemes, yields at least 88% of the performance of hand-optimized OpenCL codes, and allows excellent weak scaling.
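The bounding box abstraction at the heart of BBMM can be illustrated in a few lines. The helpers below are a hypothetical sketch of computing a tile's bounding box from its accessed element indices and of testing overlap between boxes; they stand in for, and are much simpler than, the thesis's compile-time static analysis.

```python
def bounding_box(accesses):
    """Smallest hyper-rectangle covering every accessed element index,
    returned as a list of (lo, hi) bounds, one pair per dimension."""
    dims = len(accesses[0])
    lo = [min(a[d] for a in accesses) for d in range(dims)]
    hi = [max(a[d] for a in accesses) for d in range(dims)]
    return list(zip(lo, hi))

def boxes_intersect(box1, box2):
    """Two boxes overlap iff their intervals overlap in every dimension --
    the kind of test a runtime would apply before computing disjoint
    regions to maximize reuse and avoid redundant transfers."""
    return all(l1 <= h2 and l2 <= h1
               for (l1, h1), (l2, h2) in zip(box1, box2))
```

Because a bounding box is rectangular by construction, it maps directly onto the strided-region copies that GPUs transfer efficiently, which is why all of BBMM's allocations and transfers are expressed in these terms.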
60

Procedurálně generované město / Procedurally Generated City

Panáček, Petr January 2011 (has links)
This paper deals with the problem of procedurally generating a city. The steps of city creation are described: road generation, extraction of minimal cycles in the road graph, division of lots, and generation of buildings. Roads and buildings are generated by an L-system. Our system generates a city from input images such as a height map, a population density map and a map of water areas. The proposed approaches are used to implement an application for city generation.
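The L-system rewriting used to grow roads and buildings can be sketched minimally. The axiom and rule below are illustrative placeholders, not the paper's actual grammar, which would carry parameters such as road length and branching driven by the input maps.

```python
def l_system(axiom, rules, iterations):
    """Minimal L-system rewriter: on each pass, every symbol is replaced
    by its production rule; symbols without a rule are copied unchanged.
    Repeated rewriting grows a string that is later interpreted
    geometrically (e.g. as road segments and turns)."""
    s = axiom
    for _ in range(iterations):
        s = "".join(rules.get(c, c) for c in s)
    return s
```

For example, `l_system("F", {"F": "F+F"}, 2)` yields `"F+F+F+F"`: a string an interpreter could read as four road segments joined by turns.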
