About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Skolans mellanrum / Learning environments and the spaces in between

Chen, Lisa January 2021 (has links)
My thesis project investigates school architecture with a focus on the school's in-between spaces, the areas usually labelled "circulation area" or "social areas" in a school environment. A school is a complex composition of priorities, intentions, and ideas about learning. In Sweden we have compulsory schooling, which means that children covered by it must attend school and take part in the activities organized there. In other words, a school is a place where the main user group, the students, has not always chosen its surroundings. A large study also shows that 84% of all bullying takes place outside the classroom; depending on age, it occurs most often outdoors, where hidden corners exist, or precisely in the corridors and break spaces where students spend time between lessons. This says something both about the priority these spaces are given in the planning process and about how much more attention the school's in-between spaces deserve. The ambition has therefore been to design a cohesive F-9 school (preschool class through grade 9) that promotes movement, curiosity, and a sense of intrigue along a circulation route through the building, with clear nodes for integration, and that fosters a sense of security through adult presence and visual overview; the teacher and staff spaces are placed strategically along the main circulation space for this reason. The school is fitted into a natural site in southern Stockholm, and the character of the site informs both the placement of the building and its exterior and interior design and organization.
2

Constructing Computational Models Of Nature For Architecture: A Case On Transcoding The Intelligence Of Cactus

Erdogan, Elif 01 February 2012 (has links) (PDF)
The environment of knowledge exchange between computation and biology elicits a contemporary approach to architecture. Computation, as an overarching mode of thinking, guides the analysis, understanding, and reinterpretation of the non-formal structure of natural organizations (such as systematic construct, information flow, and process through time) for architectural form generation. Consequently, computing theory brings about a mind-shift in which processes, relations, and dependencies become a major concern for reconsidering and re-comprehending the environment. Moreover, computation provides universal modes of thinking and tools for modeling, within which transdisciplinary studies and knowledge interchange between distinct disciplines can flourish. This thesis will discuss architectural form generation by interpreting computation as "transcoding" and as an interface, while nature will be regarded as a "model" and a source for learning. A case study will be conducted by analyzing cactus plants and their common generative logic within the framework of computation. The resulting computational model of the cactus plants will then be scrutinized for probable outcomes, questioning what such a re-interpretation of natural systems may imply for architecture.
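The abstract does not spell out which generative rule the thesis extracts from the cactus, so the following is only an illustrative sketch of what "constructing a computational model of nature" can look like in code: Vogel's golden-angle spiral, a classic phyllotaxis model that describes how areoles and ribs are arranged in many cacti. The function name and parameters are hypothetical and are not taken from the thesis.

```python
import math

GOLDEN_ANGLE = math.pi * (3.0 - math.sqrt(5.0))  # ~137.5 degrees, in radians


def phyllotaxis_points(n, scale=1.0):
    """Vogel's model: n points placed on a golden-angle spiral.

    Illustrative only -- one well-known way to encode a generative rule
    observed in cacti (and sunflowers) as a simple computational model.
    """
    points = []
    for k in range(n):
        r = scale * math.sqrt(k)      # radius grows with the square root of the index
        theta = k * GOLDEN_ANGLE      # constant angular increment between points
        points.append((r * math.cos(theta), r * math.sin(theta)))
    return points


if __name__ == "__main__":
    for x, y in phyllotaxis_points(10):
        print(f"{x:6.2f} {y:6.2f}")
```

Such a rule-based point distribution is the kind of abstract, non-formal structure that could then be transcoded into an architectural form-generation process.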
3

Apprentissage de représentations pour la reconnaissance visuelle / Learning representations for visual recognition

Saxena, Shreyas 12 December 2016 (has links)
In this dissertation, we propose machine learning methods designed to benefit from the recent overwhelming growth of digital media content. First, we consider the problem of improving the efficiency of image retrieval. We propose a Coordinated Local Metric Learning (CLML) approach that learns local Mahalanobis metrics and integrates them into a global representation in which the l2 distance can be used. This allows the data to be visualized in a single view and enables the use of efficient l2-based retrieval methods. Our approach can be interpreted as learning a linear projection on top of an explicit high-dimensional kernel embedding. This interpretation allows existing frameworks for Mahalanobis metric learning to be applied to learning local metrics in a coordinated manner. Our experiments show that CLML improves over previous global and local metric learning approaches for the task of face retrieval. Second, we present an approach that leverages the success of convolutional neural network (CNN) models for visible-spectrum face recognition to improve heterogeneous face recognition, e.g., the recognition of near-infrared images from visible-spectrum training images. We explore different metric learning strategies over features from the intermediate layers of the networks in order to reduce the discrepancies between the modalities. In our experiments we found that the depth of the optimal features for a given task is positively correlated with the domain shift between the source domain (CNN training data) and the target domain. Experimental results show that we can use CNNs trained on visible-spectrum images to obtain results that improve over the state of the art for heterogeneous face recognition with near-infrared images and sketches. Third, we present convolutional neural fabrics for exploring the discrete and exponentially large space of CNN architectures in an efficient and systematic manner. Instead of aiming to select a single optimal architecture, we propose a "fabric" that embeds an exponentially large number of architectures. The fabric consists of a 3D trellis that connects response maps at different layers, scales, and channels with a sparse, homogeneous, local connectivity pattern. The only hyperparameters of the fabric (the number of channels and layers) are not critical for performance. The acyclic nature of the fabric allows us to use backpropagation for learning, which can efficiently configure the fabric to implement each one of exponentially many architectures and, more generally, ensembles of all of them. While scaling linearly in computation and memory requirements, the fabric leverages exponentially many chain-structured architectures in parallel by massively sharing weights between them. We present benchmark results competitive with the state of the art for image classification on MNIST and CIFAR10, and for semantic segmentation on the Part Labels dataset.
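The identity the first contribution relies on, that a Mahalanobis metric can be absorbed into a linear projection after which plain l2 distance applies, can be checked in a few lines of NumPy. This is a minimal sketch of that identity only, not the CLML training procedure; the matrices below are random placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

# A valid Mahalanobis metric M must be positive semi-definite, so write M = L^T L.
L = rng.standard_normal((5, 5))
M = L.T @ L

x, y = rng.standard_normal(5), rng.standard_normal(5)

# Mahalanobis distance under M ...
d_mahalanobis = np.sqrt((x - y) @ M @ (x - y))

# ... equals the Euclidean (l2) distance after projecting both points by L.
# This is why learned local metrics can be folded into a global embedding
# and queried with standard l2-based retrieval structures.
d_projected = np.linalg.norm(L @ x - L @ y)

assert np.isclose(d_mahalanobis, d_projected)
print(d_mahalanobis, d_projected)
```

The same algebra is what lets the learned local metrics be combined into one global representation searchable with ordinary nearest-neighbour tooling.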
4

Learning Compact Architectures for Deep Neural Networks

Srinivas, Suraj January 2017 (has links) (PDF)
Deep neural networks with millions of parameters are at the heart of many state-of-the-art computer vision models. However, recent work has shown that models with a much smaller number of parameters can often perform just as well. A smaller model has the advantage of being faster to evaluate and easier to store, both of which are crucial for real-time and embedded applications. While prior work on compressing neural networks has looked at methods based on sparsity, quantization, and factorization of neural network layers, we look at the alternate approach of pruning neurons. Training neural networks is often described as a kind of 'black magic', as successful training requires setting the right hyper-parameter values (such as the number of neurons in a layer, the depth of the network, etc.). It is often not clear what these values should be, and these decisions often end up being either ad hoc or driven through extensive experimentation. It would be desirable to set some of these hyper-parameters automatically for the user, so as to minimize trial and error. Combining this objective with our earlier preference for smaller models, we ask the following question: for a given task, is it possible to come up with small neural network architectures automatically? In this thesis, we propose methods to achieve this. The work is divided into four parts. First, given a neural network, we look at the problem of identifying important and unimportant neurons. We study this problem in a data-free setting, i.e., assuming that the data the neural network was trained on is not available. We propose two rules for identifying wasteful neurons and show that these suffice in such a data-free setting. By removing neurons based on these rules, we are able to reduce model size without significantly affecting accuracy. Second, we propose an automated learning procedure to remove neurons during the process of training. We call this procedure 'Architecture-Learning', as it automatically discovers the optimal width and depth of neural networks. We empirically show that this procedure is preferable to trial-and-error-based Bayesian optimization procedures for selecting neural network architectures. Third, we connect 'Architecture-Learning' to a popular regularizer called 'Dropout' and propose a novel regularizer which we call 'Generalized Dropout'. From a Bayesian viewpoint, this method corresponds to a hierarchical extension of the Dropout algorithm. Empirically, we observe that Generalized Dropout corresponds to a more flexible version of Dropout and works in scenarios where Dropout fails. Finally, we apply our procedure for removing neurons to the problem of removing weights in a neural network, and achieve state-of-the-art results in sparsifying neural networks.
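As a rough, hypothetical illustration of the data-free setting described above (and not the thesis's exact saliency rules), the sketch below merges near-duplicate neurons in a single fully connected layer using only the weights: when two neurons' incoming weight vectors are nearly identical, one is removed and its outgoing weights are folded into the survivor, so no training data is needed. Biases and activations are ignored for brevity.

```python
import numpy as np


def prune_duplicate_neurons(W_in, W_out, tol=1e-2):
    """Data-free pruning sketch for one hidden layer.

    W_in  : (hidden, inputs)  incoming weights of the hidden layer
    W_out : (outputs, hidden) outgoing weights of the hidden layer
    A neuron whose incoming weight vector lies within `tol` (Euclidean
    distance) of an earlier neuron is removed; its outgoing weights are
    added to the surviving neuron's, which approximately preserves the
    layer's function without touching any data.
    """
    keep, merged_out = [], []
    for i in range(W_in.shape[0]):
        for j, k in enumerate(keep):
            if np.linalg.norm(W_in[i] - W_in[k]) < tol:
                merged_out[j] = merged_out[j] + W_out[:, i]  # fold into survivor
                break
        else:
            keep.append(i)
            merged_out.append(W_out[:, i].copy())
    return W_in[keep], np.stack(merged_out, axis=1)


# Tiny demo: neuron 2 duplicates neuron 0, so it is merged away.
W_in = np.array([[1.0, 2.0], [0.5, -1.0], [1.0, 2.0]])
W_out = np.array([[0.3, 0.1, 0.2]])
W_in_p, W_out_p = prune_duplicate_neurons(W_in, W_out)
print(W_in_p.shape, W_out_p)  # (2, 2) and merged outgoing weights [[0.5, 0.1]]
```

The merge step is what distinguishes this kind of pruning from simply zeroing weights: the removed neuron's contribution is redistributed rather than discarded.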
5

Architektura školy jako vyjádření pedagogického konceptu / School Architecture as an Educational Concept

Chudá, Kateřina January 2017 (has links)
What are the modern architectural trends in foreign and domestic learning environments? That is the main question of this thesis, which tries to merge the architectural and pedagogical points of view. Even though the topic is becoming more and more relevant, it is still hard to find in Czech literature. The thesis analyses new trends in school projects and how they align with the school's educational concept. The theoretical part explains the terms learning environment and educational concept and presents the main architectural trends in school design. The analytical part presents ten foreign schools, looking into the alignment between the architectural concept and the educational concept. In the research section, nine recently built school projects are presented; eight Czech and one Austrian primary school are investigated through interviews and observations. The research comes to the conclusion that, in response to new and changing educational requirements, most of the examined school projects implement some new features, such as relaxation areas for pupils and spaces for developing students' social and personal skills. Only a few of the examined architectural projects take the educational concept into consideration; mostly it is the educational concept that is subordinated to the school building project....
