151 |
Association Learning Via Deep Neural Networks. Landeen, Trevor J., 01 May 2018.
Deep learning has been making headlines in recent years and is often portrayed as an emerging technology on a meteoric rise towards fully sentient artificial intelligence. In reality, deep learning is the most recent renaissance of a 70-year-old technology and is far from possessing true intelligence. The renewed interest is motivated by recent successes on challenging problems, the accessibility made possible by hardware developments, and dataset availability.
The predecessor to deep learning, commonly known as the artificial neural network, is a computational network set up to mimic the biological neural structure found in brains. However, unlike human brains, artificial neural networks in most cases cannot transfer inferences from one problem to another. As a result, developing an artificial neural network requires a large number of examples of the desired behavior for a specific problem. Furthermore, developing an artificial neural network capable of solving the problem can take days, or even weeks, of computation.
Two specific problems addressed in this dissertation are both input association problems. One problem challenges a neural network to identify overlapping regions in images and is used to evaluate the ability of a neural network to learn associations between inputs of similar types. The other problem asks a neural network to identify which observed wireless signals originated from observed potential sources and is used to assess the ability of a neural network to learn associations between inputs of different types.
The neural network solutions to both problems introduced, discussed, and evaluated in this dissertation demonstrate deep learning’s applicability to problems which have previously attracted little attention.
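As a rough illustration of the input-association setting described above (not the architecture developed in the dissertation), the following hypothetical PyTorch sketch embeds two inputs of different types with separate encoders and predicts an association score; all layer sizes and names are illustrative assumptions.

    import torch
    import torch.nn as nn

    class AssociationScorer(nn.Module):
        """Toy association network: embeds two (possibly different-type) inputs
        and predicts the probability that they are associated."""
        def __init__(self, dim_a, dim_b, dim_embed=64):
            super().__init__()
            self.enc_a = nn.Sequential(nn.Linear(dim_a, 128), nn.ReLU(), nn.Linear(128, dim_embed))
            self.enc_b = nn.Sequential(nn.Linear(dim_b, 128), nn.ReLU(), nn.Linear(128, dim_embed))
            self.head = nn.Sequential(nn.Linear(2 * dim_embed, 64), nn.ReLU(), nn.Linear(64, 1))

        def forward(self, a, b):
            za, zb = self.enc_a(a), self.enc_b(b)
            return torch.sigmoid(self.head(torch.cat([za, zb], dim=-1)))

    # Hypothetical example: associate a 256-dim signal observation with a 2-dim source position.
    model = AssociationScorer(dim_a=256, dim_b=2)
    score = model(torch.randn(8, 256), torch.randn(8, 2))  # shape (8, 1), values in [0, 1]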
|
152 |
Organisering, matematiskt innehåll och feedback i specialundervisning: En kvalitativ fallstudie av några specialpedagogers matematikundervisning / Organization, mathematical content and feedback in special education: A qualitative case study of some special educators' mathematics teaching. Bäckström, Inger, January 2008.
Summary: This is a qualitative case study of five special educators' work with special education in mathematics. The aim is to map their work and conceptions by describing and analyzing how they organize their teaching of mathematics, how they teach the subject matter, how they give feedback to students in the classroom, and how they describe what they perceive to be the special education component of their teaching. For the collection of empirical data, a qualitative approach with semi-structured in-depth interviews and observations in the form of audio recordings and unstructured field notes during lessons was used. Frame factor theory, phenomenographic theory, learning theories and the case study method guided the processing and analysis of the empirical material. The results show that the learning offered to the students by four of the special educators contributes to a surface approach to learning, and the feedback they offer contributes to external motivation. The special education components they consider important are: to see the person behind a behavior, to have positive expectations, to base the teaching on the child's experience, to be personal, to like the students and to be concrete in teaching. The fifth special educator offers the students a deep approach to learning and a way of working during the lessons that creates motivation and willingness to learn. The special education component she considers important is to find out what knowledge the student already has and to work from there, so that the student has the possibility to understand what he or she has not yet understood.
|
153 |
Body-Environment Dialogue: Using Somatic Experiences to Improve Political Decision Making. Sidorenko, Alisa, January 2015.
Humankind faces global ecological problems, and the social issues that result from them, while continually destroying the ecosystems that are the life-support mechanisms of the planet and of human civilization. The socio-economic system is largely shaped by top-down decision making. Political decisions are a point of high leverage in sustainability issues, but today they are made in a reductionist way, focusing on short-term profit and jeopardizing the planet and people in the long run. The thesis explores ways of integrating a more holistic approach into political decision making. The study describes the connection between cognitive processes (e.g. learning or decision making) and somatic experiences: human decisions are considered a dynamic product of interaction between cognition, body and environment. The theory of deep learning helps to understand how decision making can be transformed, and embodied cognitive science explains what facilitates the process of deep learning. The study develops the concept of the "body-environment dialogue": the somatic and cognitive integration of an agent and its context, through which the agent receives non-verbal information that is then processed into the agent's inner knowledge. This way of processing information, unlike analytical thinking, is grounded in mindfulness and reflection. It results in a holistic insight into the global socio-ecological system and its interconnections, awakens intrinsic values and brings about change in one's decisions and actions. Embodied experiences and connection with the natural environment are considered ways to facilitate deep learning, which in turn affects decision making. The empirical part of the research tests the possibility of affecting decision making through embodied contact with nature and the local context. An experimental study based on a three-day outdoor experiential course demonstrates a certain change in the participants' decision making and illustrates the challenges and drawbacks of such an approach.
|
154 |
Modeling time-series with deep networks. Längkvist, Martin, January 2014.
No description available.
|
155 |
Reducing animator keyframes. Holden, Daniel, January 2017.
The aim of this doctoral thesis is to present a body of work that reduces the time animators spend manually constructing keyframed animation. To this end we present a number of state-of-the-art machine learning techniques applied to the domain of character animation. Data-driven tools for the synthesis and production of character animation have a good track record of success. In particular, they have been adopted widely in the games industry, as they allow designers as well as animators to simply specify high-level descriptions of the animations to be created, with the rest produced automatically. Even so, these techniques have not been widely adopted in the film industry in the production of keyframe-based animation [Planet, 2012]. As a result, the cost of producing high-quality keyframed animation remains very high, and the time of professional animators is increasingly precious. We present our work in four main chapters. We first tackle the key problem in the adoption of data-driven tools for keyframed animation: a problem called the inversion of the rig function. Secondly, we show the construction of a new tool for data-driven character animation called the motion manifold, a representation of motion constructed using deep learning that has a number of properties useful for animation research. Thirdly, we show how the motion manifold can be extended as a general tool for performing data-driven animation synthesis and editing. Finally, we show how these techniques developed for keyframed animation can also be adapted to advance the state of the art in the games industry.
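To give a concrete flavor of what a learned motion manifold can look like, the sketch below is a minimal convolutional autoencoder over windows of pose data, written in PyTorch; it is an illustrative assumption rather than the exact network used in the thesis, and the joint count, window length and layer sizes are placeholders. The bottleneck activations serve as manifold coordinates that downstream synthesis and editing tools could operate on.

    import torch
    import torch.nn as nn

    class MotionAutoencoder(nn.Module):
        """Minimal 1-D convolutional autoencoder over a window of poses.
        The bottleneck activations act as a learned 'motion manifold' coordinate."""
        def __init__(self, n_dof=66, n_hidden=128):
            super().__init__()
            # Encoder: convolve over time (frames), treating joint DOFs as channels.
            self.encoder = nn.Sequential(
                nn.Conv1d(n_dof, n_hidden, kernel_size=15, padding=7),
                nn.ReLU(),
                nn.MaxPool1d(2),
            )
            # Decoder: upsample back to the original frame rate.
            self.decoder = nn.Sequential(
                nn.Upsample(scale_factor=2, mode='nearest'),
                nn.Conv1d(n_hidden, n_dof, kernel_size=15, padding=7),
            )

        def forward(self, x):            # x: (batch, n_dof, n_frames)
            z = self.encoder(x)          # manifold coordinates
            return self.decoder(z), z

    model = MotionAutoencoder()
    motion = torch.randn(4, 66, 240)     # 4 clips, 66 DOFs, 240 frames (assumed sizes)
    recon, z = model(motion)
    loss = nn.functional.mse_loss(recon, motion)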
|
156 |
The Effect of Teaching with Stories on Associate Degree Nursing Students' Approach to Learning and Reflective Practice. January 2012.
abstract: This action research study is the culmination of several action cycles investigating cognitive information processing and learning strategies based on students' approaches to learning theory, and assessing students' meta-cognitive learning, motivation, and reflective development suggestive of deep learning. The study introduces a reading assignment as an integrative teaching method with the purpose of challenging students' assumptions and requiring them to think from multiple perspectives, thus fostering deep learning. The hypothesis is that students who are required to critically reflect on their own perceptions will develop the deep learning skills needed in the 21st century. Pre- and post-surveys were used to assess changes in students' preferred approach to learning and reflective practice styles. Qualitative data were collected in the form of student stories and student literature circle transcripts to further describe student perceptions of the experience. Results indicate that stories that include examples of critical reflection may influence students to use more transformational types of reflective learning actions. Approximately fifty percent of the students in the course increased their preference for deep learning by the end of the course. Further research is needed to determine the effect of narratives on student preferences for deep learning. / Dissertation/Thesis / Ed.D. Leadership and Innovation 2012
|
157 |
Multi-person Pose Estimation in Soccer Videos with Convolutional Neural Networks. Skyttner, Axel, January 2018.
Pose estimation is the problem of detecting the poses of people in images; multi-person pose estimation is the problem of detecting the poses of multiple persons in an image. This thesis investigates multi-person pose estimation by applying the associative embedding method to images from soccer videos. Three models are compared: first, a pre-trained model; second, a fine-tuned model; and third, a model extended to handle image sequences. The pre-trained model performed well on soccer images, and the fine-tuned model performed better than the pre-trained model. The image sequence model performed on par with the fine-tuned model but not better. This thesis concludes that the associative embedding model is a feasible option for pose estimation in soccer videos and should be researched further.
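For reference, associative embedding groups detected joints into people by predicting a "tag" value per joint and training the tags so that joints of the same person agree while the mean tags of different people are pushed apart. The sketch below is a minimal, hedged version of that grouping loss (after Newell et al.), not the code used in this thesis; the tensor layout is an assumption.

    import torch

    def associative_embedding_loss(tags, person_joint_idx):
        """Grouping loss on 1-D embedding 'tags'.

        tags:             (num_detected_joints,) predicted tag value per joint
        person_joint_idx: list of index tensors, one per ground-truth person,
                          selecting that person's joints inside `tags`
        """
        means = [tags[idx].mean() for idx in person_joint_idx]
        # Pull term: joints of the same person should share one tag value.
        pull = sum(((tags[idx] - m) ** 2).mean() for idx, m in zip(person_joint_idx, means))
        pull = pull / max(len(means), 1)
        # Push term: mean tags of different persons should be far apart.
        push = 0.0
        for i, mi in enumerate(means):
            for j, mj in enumerate(means):
                if i != j:
                    push = push + torch.exp(-0.5 * (mi - mj) ** 2)
        push = push / max(len(means) ** 2, 1)
        return pull + push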
|
158 |
Klasifikace na množinách bodů v 3D / Classification on point sets in 3D. Střelský, Jakub, January 2018.
Increasing interest in the classification of 3D geometrical data has led to the discovery of PointNet, a neural network architecture capable of processing unordered point sets. This thesis explores several methods of utilizing conventional point features within PointNet and their impact on classification. The classification performance of the presented methods was experimentally evaluated and compared with a baseline PointNet model on four different datasets. The results of the experiments suggest that some of the considered features can improve the classification effectiveness of PointNet on difficult datasets with objects that are not aligned into a canonical orientation. In particular, the well-known spin image representations can be employed successfully and reliably within PointNet. Furthermore, a feature-based alternative to the spatial transformer, the sub-network of PointNet responsible for aligning misaligned objects into a canonical orientation, has been introduced. Additional experiments demonstrate that this alternative can be competitive with the spatial transformer on challenging datasets.
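As background, a PointNet-style classifier applies a shared per-point MLP and then a symmetric (order-invariant) max pooling. The hypothetical PyTorch sketch below shows how extra per-point features, such as spin-image descriptors, could simply be concatenated to the coordinates before the shared MLP; the layer sizes, feature dimension and class count are illustrative assumptions, not the models evaluated in the thesis.

    import torch
    import torch.nn as nn

    class MiniPointNet(nn.Module):
        """Minimal PointNet-style classifier: a shared per-point MLP followed by
        a symmetric max-pool, fed xyz plus optional extra per-point features."""
        def __init__(self, extra_feat_dim=0, n_classes=40):
            super().__init__()
            in_dim = 3 + extra_feat_dim          # xyz (+ e.g. spin-image descriptor)
            self.point_mlp = nn.Sequential(
                nn.Linear(in_dim, 64), nn.ReLU(),
                nn.Linear(64, 1024), nn.ReLU(),
            )
            self.classifier = nn.Sequential(
                nn.Linear(1024, 256), nn.ReLU(),
                nn.Linear(256, n_classes),
            )

        def forward(self, points):                # points: (batch, n_points, in_dim)
            per_point = self.point_mlp(points)    # shared weights across points
            global_feat, _ = per_point.max(dim=1) # order-invariant aggregation
            return self.classifier(global_feat)

    # Example: 1024 points per cloud, each with xyz plus an assumed 16-dim extra feature.
    model = MiniPointNet(extra_feat_dim=16)
    logits = model(torch.randn(8, 1024, 19))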
|
159 |
Techniques d'analyse de contenu appliquées à l'imagerie spatiale / Machine learning applied to remote sensing images. Le Goff, Matthieu, 20 October 2017.
Since the 1970s, remote sensing has made it possible to study the Earth's surface, in particular thanks to satellite images produced in digital format. Compared to airborne images, satellite images provide more information because they have greater spatial coverage and a short revisit period. The rise of remote sensing was accompanied by the development of processing technologies that enable users to analyze satellite images with increasingly automatic processing chains. Since the 1970s, the various Earth observation missions have accumulated a large amount of information over time, due in particular to the shorter revisit time for a given region, the refinement of the spatial resolution and the increase of the swath (the spatial coverage of an acquisition). Remote sensing, once confined to the study of a single image, has progressively turned to the analysis of long time series of multispectral images acquired at different dates. The annual flow of satellite images is expected to reach several petabytes in the near future. The availability of such a large amount of data is an asset for developing advanced processing chains. The machine learning techniques widely used in remote sensing have improved considerably: the robustness of classical machine learning approaches was often limited by the amount of available data, and new techniques have been developed to use this large new data flow effectively. However, the amount of data and the complexity of the algorithms embedded in the new processing pipelines require substantial computing power. In parallel, the computing power available for image processing has also increased: Graphics Processing Units (GPUs) are increasingly used, and public or private clouds are becoming more widespread. All the power required for automatic image processing chains is now available at a reasonable cost, and the design of new processing chains must take this new factor into account. In remote sensing, the volume of data to be exploited has become a problem because of the computing power required for the analysis. Traditional remote sensing algorithms were designed for data that can be stored in internal memory throughout processing, a condition that is less and less satisfied given the quantity of images and their resolution. Traditional remote sensing algorithms therefore need to be revisited and adapted for large-scale data processing. This need is not specific to remote sensing and is found in other fields such as the web, medicine and speech recognition, which have already solved some of these problems; part of the techniques and technologies developed in those fields still needs to be adapted to satellite images. This thesis focuses on remote sensing algorithms for processing massive data volumes. In particular, an existing machine learning algorithm is studied and adapted for a distributed implementation, where the objective is scalability, i.e. the ability of the algorithm to process a large quantity of data given suitable computing power.
Finally, a second methodology is proposed, based on recent machine learning algorithms, convolutional neural networks, together with a methodology for applying them to our use cases on satellite images.
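As a minimal illustration of applying convolutional neural networks to satellite imagery (a sketch only, not the processing chain developed in the thesis), the following PyTorch example classifies small multispectral patches; the band count, patch size and class count are assumptions.

    import torch
    import torch.nn as nn

    class PatchCNN(nn.Module):
        """Toy convolutional classifier for multispectral image patches
        (e.g. land-cover classes); band count and patch size are assumed."""
        def __init__(self, n_bands=4, n_classes=10):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(n_bands, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            )
            self.head = nn.Linear(64 * 8 * 8, n_classes)   # assumes 32x32 input patches

        def forward(self, x):
            f = self.features(x)
            return self.head(f.flatten(1))

    model = PatchCNN()
    logits = model(torch.randn(16, 4, 32, 32))   # 16 patches, 4 spectral bands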
|
160 |
Image Reconstruction, Classification, and Tracking for Compressed Sensing Imaging and Video. January 2016.
abstract: Compressed sensing (CS) is a novel approach to collecting and analyzing data of all types. By exploiting prior knowledge of the compressibility of many naturally-occurring signals, specially designed sensors can dramatically undersample the data of interest and still achieve high performance. However, the generated data are pseudorandomly mixed and must be processed before use. In this work, a model of a single-pixel compressive video camera is used to explore the problems of performing inference based on these undersampled measurements. Three broad types of inference from CS measurements are considered: recovery of video frames, target tracking, and object classification/detection. Potential applications include automated surveillance, autonomous navigation, and medical imaging and diagnosis.
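As a reminder of the underlying measurement model (a generic single-pixel-camera sketch, not the specific hardware modeled in the dissertation), each compressive measurement is the inner product of the vectorized scene with a pseudorandom binary pattern, so far fewer measurements than pixels are collected; the dimensions below are illustrative.

    import numpy as np

    rng = np.random.default_rng(0)

    # Single-pixel-camera style measurement model (illustrative dimensions):
    # each measurement mixes the whole scene through one pseudorandom pattern.
    N = 64 * 64                      # pixels in the (vectorized) frame
    M = N // 8                       # heavy undersampling: 12.5% of the pixel count
    A = rng.integers(0, 2, size=(M, N)).astype(float)  # 0/1 mask patterns
    x = rng.random(N)                # unknown scene (vectorized image)
    y = A @ x                        # pseudorandomly mixed measurements
    print(y.shape)                   # (512,)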
Recovery of CS video frames is far more complex than recovery of still images, which are known to be (approximately) sparse in a linear basis such as the discrete cosine transform. By combining the sparsity of individual frames with an optical flow-based model of inter-frame dependence, the perceptual quality and peak signal-to-noise ratio (PSNR) of reconstructed frames are improved. The efficacy of this approach is demonstrated for the cases of a priori known image motion and unknown but constant image-wide motion.
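For context, the per-frame sparsity baseline that such methods build on can be written as an l1-regularized recovery in a DCT basis. The sketch below solves it with the iterative shrinkage-thresholding algorithm (ISTA), using a 1-D DCT over the vectorized frame for simplicity; it is not the optical-flow-coupled reconstruction of the dissertation, and the step size and regularization weight are assumptions.

    import numpy as np
    from scipy.fft import dct, idct

    def ista_recover(y, A, lam=0.1, n_iter=200):
        """Recover DCT coefficients c minimizing
        0.5*||y - A*idct(c)||^2 + lam*||c||_1 via ISTA."""
        n = A.shape[1]
        c = np.zeros(n)
        # Step size from the Lipschitz constant of the smooth term's gradient.
        t = 1.0 / np.linalg.norm(A, 2) ** 2
        for _ in range(n_iter):
            x = idct(c, norm='ortho')                    # synthesize frame estimate
            grad = dct(A.T @ (A @ x - y), norm='ortho')  # gradient in coefficient space
            c = c - t * grad
            c = np.sign(c) * np.maximum(np.abs(c) - t * lam, 0.0)  # soft threshold
        return idct(c, norm='ortho')                     # recovered (vectorized) frame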
Although video sequences can be reconstructed from CS measurements, the process is computationally costly. In autonomous systems, this reconstruction step is unnecessary if higher-level conclusions can be drawn directly from the CS data. A tracking algorithm is described and evaluated which can maintain track on target vehicles at very high levels of compression, where reconstruction of video frames fails. The algorithm performs tracking by detection using a particle filter whose likelihood is given by a maximum average correlation height (MACH) target template model.
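To make the tracking-by-detection idea concrete, the following skeleton shows one step of a bootstrap particle filter whose likelihood comes from a correlation score supplied by the caller (standing in for a MACH-style template response); it is a hedged illustration, not the dissertation's implementation, and the motion model and resampling rule are assumptions.

    import numpy as np

    rng = np.random.default_rng(1)

    def particle_filter_step(particles, weights, correlation_score, motion_std=2.0):
        """One predict/update/resample step of a bootstrap particle filter.

        particles:         (P, 2) candidate target positions (x, y)
        weights:           (P,) current importance weights
        correlation_score: callable giving a non-negative template-match score
                           at a position (placeholder for a MACH-style response)
        """
        # Predict: random-walk motion model.
        particles = particles + rng.normal(scale=motion_std, size=particles.shape)
        # Update: weight each particle by its correlation-based likelihood.
        weights = weights * np.array([correlation_score(p) for p in particles])
        weights = weights / (weights.sum() + 1e-12)
        # Resample when the effective sample size collapses.
        if 1.0 / np.sum(weights ** 2) < 0.5 * len(weights):
            idx = rng.choice(len(weights), size=len(weights), p=weights)
            particles = particles[idx]
            weights = np.full(len(weights), 1.0 / len(weights))
        estimate = (weights[:, None] * particles).sum(axis=0)  # posterior mean position
        return particles, weights, estimate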
Motivated by possible improvements over the MACH filter-based likelihood estimation of the tracking algorithm, the application of deep learning models to detection and classification of compressively sensed images is explored. In tests, a Deep Boltzmann Machine trained on CS measurements outperforms a naive reconstruct-first approach.
Taken together, progress in these three areas of CS inference has the potential to lower system cost and improve performance, opening up new applications of CS video cameras. / Dissertation/Thesis / Doctoral Dissertation Electrical Engineering 2016
|