  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Förtätning och stadskvaliteter: En studie om hur förtätning kan skapa stadskvaliteter / Densification and urban qualities: A study of how densification can create urban qualities

Sarka, Julius, Persson, Gustav January 2016 (has links)
Further expansion of the city, such as the sprawl triggered by modernism, is no longer a reasonable option. Planners and scientists have now realised the negative effects of urban sprawl, which include, among others, longer distances and a lack of facilities. The effect of modernism can be noted in Malmö, where population density decreased between 1960 and 1990. Today, Malmö expresses a desire to increase population density by letting the city grow within its existing borders. Malmö sees this as an opportunity to create a more vibrant and safer urban environment; urban densification is therefore seen as a means of improving the existing urban environment. This study aims to investigate how densification creates urban qualities and thus a better urban environment. To understand this, we have reviewed planning documents produced by Malmö’s planning office as well as literature on the subject. We have also interviewed planners at Malmö’s planning office. Based on the planning documents, interviews and literature, we have identified the urban qualities of proximity, encounters and safety. These urban qualities can be achieved through densification, but only if it is combined with mixed use.
2

Incorporating Sparse Attention Mechanism into Transformer for Object Detection in Images / Inkludering av gles attention i en transformer för objektdetektering i bilder

Duc Dao, Cuong January 2022 (has links)
DEtection TRansformer, DETR, introduces an innovative design for object detection based on softmax attention. However, the softmax operation produces dense attention patterns, i.e., all entries in the attention matrix receive a non-zero weight, regardless of their relevance for detection. In this work, we explore several alternatives to softmax to incorporate sparsity into the architecture of DETR. Specifically, we replace softmax with a sparse transformation from the α-entmax family: sparsemax and entmax-1.5, which induce a fixed degree of sparsity, and α-entmax, which treats sparsity as a learnable parameter of each attention head. In addition to evaluating the effect on detection performance, we examine the resulting attention maps from the perspective of explainability. To this end, we introduce three evaluation metrics to quantify the sparsity, complementing the qualitative observations. Although our experimental results on the COCO detection dataset do not show an increase in detection performance, we find that learnable sparsity provides more flexibility to the model and produces more explainable attention maps. To the best of our knowledge, we are the first to introduce learnable sparsity into the architecture of transformer-based object detectors.
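The α-entmax family mentioned in the abstract replaces softmax with transformations that can assign exact zero weight to irrelevant entries. As an illustration only (this is not code from the thesis), a minimal NumPy sketch of sparsemax, the α = 2 member of the family:

```python
import numpy as np

def sparsemax(z):
    """Sparsemax: Euclidean projection of the score vector z onto the
    probability simplex. Unlike softmax, low-scoring entries receive
    exactly zero weight."""
    z = np.asarray(z, dtype=float)
    z_sorted = np.sort(z)[::-1]            # scores in descending order
    k = np.arange(1, len(z) + 1)
    cumsum = np.cumsum(z_sorted)
    # Support size: the largest k with 1 + k * z_(k) > sum of the top-k scores
    k_max = k[1 + k * z_sorted > cumsum][-1]
    tau = (cumsum[k_max - 1] - 1) / k_max  # threshold subtracted from all scores
    return np.maximum(z - tau, 0.0)
```

Like softmax, the output sums to 1 and can be used as an attention distribution, but entries below the threshold τ are exactly zero, which is what makes the resulting attention maps sparse and easier to inspect.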
3

Modelling synaptic rewiring in brain-like neural networks for representation learning / Modellering av synaptisk omkoppling i hjärnliknande neurala nätverk för representationsinlärning

Bhatnagar, Kunal January 2023 (has links)
This research investigated a sparsity method inspired by the principles of structural plasticity in the brain in order to create a sparse model of the Bayesian Confidence Propagation Neural Network (BCPNN) during the training phase. This was done by extending the structural plasticity mechanism in the implementation of the BCPNN. While the initial algorithm presented two synaptic states (Active and Silent), this research extended it to three synaptic states (Active, Silent and Absent) with the aim of enhancing sparsity configurability and emulating a more brain-like algorithm, drawing parallels with synaptic states observed in the brain. Benchmarking was conducted using the MNIST and Fashion-MNIST datasets, where the proposed three-state model was compared against the previous two-state model in terms of representational learning. The findings suggest that the three-state model not only provides added configurability but also, in certain low-sparsity settings, showcases similar representational learning abilities to the two-state model. Moreover, in high-sparsity settings, the three-state model demonstrates a commendable balance in the accuracy-sparsity trade-off.
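The record does not include the thesis's implementation. Purely to illustrate the three synaptic states described above, here is a hypothetical NumPy sketch; the function names, thresholds, and rewiring rules are assumptions for illustration, not the BCPNN algorithm from the thesis:

```python
import numpy as np

# Hypothetical encoding of the three synaptic states named in the abstract.
ACTIVE, SILENT, ABSENT = 0, 1, 2

def forward(x, w, state):
    """Only ACTIVE synapses contribute to the output; SILENT synapses are
    retained (and could still be updated) but masked out, ABSENT synapses
    are pruned permanently."""
    return x @ (w * (state == ACTIVE))

def rewire(w, state, grow_thresh=0.5, prune_thresh=0.1):
    """Illustrative rewiring step: strong SILENT synapses are promoted to
    ACTIVE, weak ACTIVE synapses are demoted to SILENT, and the weakest
    SILENT synapses are removed to ABSENT."""
    state = state.copy()
    mag = np.abs(w)
    state[(state == SILENT) & (mag > grow_thresh)] = ACTIVE
    state[(state == ACTIVE) & (mag < prune_thresh)] = SILENT
    state[(state == SILENT) & (mag < prune_thresh / 2)] = ABSENT
    return state
```

The point of the third state in this sketch is that ABSENT is irreversible, so the fraction of synapses that can ever become ACTIVE again shrinks over training, giving an extra knob for configuring sparsity, which mirrors the added configurability the abstract reports.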
