81 |
Efeito do 'shot peening' sobre a nitretação de peças de ferro produzidas por metalurgia do pó / The effect of shot peening on the gas nitriding of iron components produced by powder metallurgy
Leonardo Calicchio, 02 July 2009 (has links)
Atualmente, quando se tem a necessidade de nitretar peças produzidas pela metalurgia do pó, usa-se a nitretação a plasma. Apesar de ser um processo de alto custo, com diversas dificuldades operacionais e de ajuste de processo, a nitretação a plasma é o único processo viável para nitretar esses materiais por ter uma ação apenas superficial, não apresentando ação nitretante no interior dos poros. Nos processos de nitretação a gás e banho de sais, o meio nitretante penetra na porosidade (interconectada) dos materiais sinterizados, havendo assim a formação de camada branca em uma grande profundidade da peça (ou mesmo na peça toda), gerando problemas de deformação e fragilização do componente. Este trabalho teve por objetivo a aplicação do processo shot peening em peças sinterizadas com a finalidade de fechar a porosidade superficial das peças e estudar seu comportamento sob o processo de nitretação a gás. O estudo verifica que o material sinterizado submetido à nitretação gasosa permitiu a entrada do meio nitretante pelos poros abertos e interconectados promovendo a formação de camada branca no interior dos poros de praticamente todo o volume da peça. Essa camada branca no interior do material fragiliza o componente e inviabiliza sua utilização como componente em praticamente qualquer aplicação industrial. As peças sinterizadas jateadas com granalhas de aço antes da nitretação também permitiram o acesso do meio nitretante no interior do componente, porém, sem potencial suficiente para a formação de camada branca. As amostras jateadas apresentaram apenas agulhas de nitretos formados durante a nitretação. / Plasma nitriding is currently the process used to nitride components produced by powder metallurgy. Despite its high cost and its many operational and process-tuning difficulties, it is the only viable process for these materials because its action is restricted to the surface, with no nitriding inside the pores.
In gas and salt-bath nitriding processes the nitriding atmosphere penetrates the interconnected pores, and the white layer forms not only on the surface but also around internal pores, resulting in embrittlement and deformation of the component. The aim of this work was to evaluate the gas-nitriding behavior of iron samples both as received and previously submitted to shot peening in order to close surface porosity. The results showed that during gas nitriding the as-received samples formed a white layer not at the surface but around the pores in the bulk of the sample, indicating that the gaseous atmosphere penetrates the interconnected pores. This internal white layer embrittles the component, so its use in industrial applications is not recommended. In contrast, samples submitted to shot peening before nitriding showed an external white layer and also allowed the nitriding atmosphere into the bulk of the sample, but in this case the nitriding potential was not sufficient to form a white layer around the internal pores.
|
82 |
On Transfer Learning Techniques for Machine Learning
Debasmit Das (8314707), 30 April 2020 (has links)
Recent progress in machine learning has been mainly due to
the availability of large amounts of annotated data used for training complex
models with deep architectures. Annotating this training data becomes
burdensome and creates a major bottleneck in maintaining machine-learning
databases. Moreover, these trained models fail to generalize to new categories
or new varieties of the same categories. This is because new categories or new
varieties have data distributions that differ from the training distribution.
To tackle these problems, this thesis proposes to develop a family of
transfer-learning techniques that can deal with different training (source) and
testing (target) distributions with the assumption that the availability of
annotated data is limited in the testing domain. This is done by using the
auxiliary data-abundant source domain from which useful knowledge is
transferred that can be applied to data-scarce target domain. This transferable
knowledge serves as a prior that biases target-domain predictions and prevents
the target-domain model from overfitting. Specifically, we explore structural
priors that encode relational knowledge between different data entities, which
provides more informative bias than traditional priors. The choice of the
structural prior depends on the information availability and the similarity
between the two domains. Depending on the domain similarity and the information
availability, we divide the transfer learning problem into four major
categories and propose different structural priors to solve each of these
sub-problems.

This thesis first focuses on the
unsupervised-domain-adaptation problem, where we propose to minimize domain
discrepancy by transforming labeled source-domain data to be close to unlabeled
target-domain data. For this problem,
the categories remain the same across the two domains and hence we assume that
the structural relationship between the source-domain samples is carried over
to the target domain. Thus, a graph or hyper-graph is constructed as the
structural prior from both domains and a graph/hyper-graph matching formulation
is used to transform samples in the source domain to be closer to samples in
the target domain. An efficient optimization scheme is then proposed to tackle
the time and memory inefficiencies associated with the matching problem. The
few-shot learning problem is studied next, where we propose to transfer
knowledge from source-domain categories containing abundantly labeled data to
novel categories in the target domain that contain only a few labeled samples. The
knowledge transfer biases the novel category predictions and prevents the model
from overfitting. The knowledge is encoded using a neural-network-based prior
that transforms a data sample to its corresponding class prototype. This neural
network is trained from the source-domain data and applied to the target-domain
data, where it transforms the few-shot samples to the novel-class prototypes
for better recognition performance. The few-shot learning problem is then
extended to the situation where we do not have access to the source-domain
data but only have access to the source-domain class prototypes. In this limited
information setting, parametric neural-network-based priors would overfit to
the source-class prototypes and hence we seek a non-parametric-based prior
using manifolds. A piecewise linear manifold is used as a structural prior to
fit the source-domain-class prototypes. This structure is extended to the
target domain, where the novel-class prototypes are found by projecting the
few-shot samples onto the manifold. Finally, the zero-shot learning problem is
addressed, which is an extreme case of the few-shot learning problem where we
do not have any labeled data in the target domain. However, we have high-level
information for both the source and target domain categories in the form of
semantic descriptors. We learn the relation between the sample space and the
semantic space, using a regularized neural network so that classification of
the novel categories can be carried out in a common representation space. This
same neural network is then used in the target domain to relate the two spaces.
In case we want to generate data for the novel categories in the target domain,
we can use a constrained generative adversarial network instead of a
traditional neural network. Thus, we use structural priors like graphs, neural
networks and manifolds to relate various data entities like samples, prototypes
and semantics for these different transfer learning sub-problems. We explore
additional post-processing steps like pseudo-labeling, domain adaptation and
calibration and enforce algorithmic and architectural constraints to further
improve recognition performance. Experimental results on standard transfer
learning image recognition datasets produced competitive results with respect
to previous work. Further experimentation and analyses of these methods
provided a better understanding of machine learning as well.
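The prototype-based few-shot idea described above can be sketched in a few lines. This is a deliberately minimal illustration, not the thesis's neural-network prior: here a class prototype is simply the mean of the few labeled support samples, and a query is assigned to the nearest prototype.

```python
import math

def prototype(samples):
    """Mean of a list of equal-length feature vectors."""
    n = len(samples)
    return [sum(x[i] for x in samples) / n for i in range(len(samples[0]))]

def classify(query, prototypes):
    """Assign the query to the class whose prototype is nearest (Euclidean)."""
    def dist(a, b):
        return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))
    return min(prototypes, key=lambda c: dist(query, prototypes[c]))

# Hypothetical few-shot support set: two novel classes, three labeled samples each.
support = {
    "cat": [[1.0, 0.1], [0.9, 0.2], [1.1, 0.0]],
    "dog": [[0.0, 1.0], [0.1, 0.9], [0.2, 1.1]],
}
protos = {c: prototype(v) for c, v in support.items()}
print(classify([0.95, 0.1], protos))  # query near the "cat" cluster
```

In the thesis, the mapping from samples to prototypes is itself learned on the source domain; the sketch only shows the nearest-prototype decision rule that such a mapping feeds into.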
|
83 |
Apprentissage et exploitation de représentations sémantiques pour la classification et la recherche d'images / Learning and exploiting semantic representations for image classification and retrieval
Bucher, Maxime, 27 November 2018 (has links)
Dans cette thèse nous étudions différentes questions relatives à la mise en pratique de modèles d'apprentissage profond. En effet, malgré les avancées prometteuses de ces algorithmes en vision par ordinateur, leur emploi dans certains cas d'usage réels reste difficile. Une première difficulté est, pour des tâches de classification d'images, de rassembler pour des milliers de catégories suffisamment de données d'entraînement pour chacune des classes. C'est pourquoi nous proposons deux nouvelles approches adaptées à ce scénario d'apprentissage, appelé « classification zero-shot ». L'utilisation d'information sémantique pour modéliser les classes permet de définir les modèles par description, par opposition à une modélisation à partir d'un ensemble d'exemples, et rend possible la modélisation sans donnée de référence. L'idée fondamentale du premier chapitre est d'obtenir une distribution d'attributs optimale grâce à l'apprentissage d'une métrique, capable à la fois de sélectionner et de transformer la distribution des données originales. Dans le chapitre suivant, contrairement aux approches standards de la littérature qui reposent sur l'apprentissage d'un espace d'intégration commun, nous proposons de générer des caractéristiques visuelles à partir d'un générateur conditionnel. Une fois générés, ces exemples artificiels peuvent être utilisés conjointement avec des données réelles pour l'apprentissage d'un classifieur discriminant. Dans une seconde partie de ce manuscrit, nous abordons la question de l'intelligibilité des calculs pour les tâches de vision par ordinateur. En raison des nombreuses et complexes transformations des algorithmes profonds, il est difficile pour un utilisateur d'interpréter le résultat retourné. Notre proposition est d'introduire un « goulot d'étranglement sémantique » dans le processus de traitement. La représentation de l'image est exprimée entièrement en langage naturel, tout en conservant l'efficacité des représentations numériques.
L'intelligibilité de la représentation permet à un utilisateur d'examiner sur quelle base l'inférence a été réalisée et ainsi d'accepter ou de rejeter la décision suivant sa connaissance et son expérience humaine. / In this thesis, we examine some practical difficulties of deep learning models. Indeed, despite promising results in computer vision, deploying them in some real-world situations raises questions. For example, in classification tasks where thousands of categories have to be recognised, it is sometimes difficult to gather enough training data for each category. We propose two new approaches for this learning scenario, called "zero-shot learning". We use semantic information to model classes, which allows us to define models by description, as opposed to modelling from a set of examples. In the first chapter we propose to learn a metric that both selects and transforms the distribution of the original data in order to obtain an optimal attribute distribution. In the following chapter, unlike the standard approaches in the literature that rely on learning a common embedding space, we propose to generate visual features from a conditional generator. The artificial examples can then be used together with real data to learn a discriminative classifier. In the second part of this thesis, we address the question of computational intelligibility for computer vision tasks. Because of the many complex transformations applied by deep learning algorithms, it is difficult for a user to interpret the returned prediction. Our proposal is to introduce what we call a "semantic bottleneck" into the processing pipeline: a crossing point at which the representation of the image is expressed entirely in natural language, while retaining the efficiency of numerical representations. This semantic bottleneck makes it possible to detect failure cases in the prediction process, and thus to accept or reject the decision.
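The feature-generation approach in the second chapter can be illustrated with a toy sketch. Everything here is an assumption for illustration: the "conditional generator" is a deliberately trivial stand-in (attribute vector plus Gaussian noise) in place of a trained generative model, and the class names and attribute values are invented.

```python
import random

random.seed(0)

# Hypothetical semantic descriptors (attribute vectors) for two classes
# that have no training images ("zero-shot" classes).
attributes = {"zebra": [1.0, 0.0, 1.0], "whale": [0.0, 1.0, 0.0]}

def generate(attr, n=50, noise=0.1):
    """Toy stand-in for a conditional generator: attribute vector plus noise.
    A real model would be a trained conditional generative network."""
    return [[a + random.gauss(0, noise) for a in attr] for _ in range(n)]

# Build a synthetic training set for the unseen classes...
synthetic = {c: generate(a) for c, a in attributes.items()}
# ...then fit the simplest possible classifier on it: nearest centroid.
centroids = {c: [sum(col) / len(col) for col in zip(*pts)]
             for c, pts in synthetic.items()}

def predict(x):
    """Classify a visual feature vector against the generated-data centroids."""
    return min(centroids, key=lambda c: sum((xi - ci) ** 2
                                            for xi, ci in zip(x, centroids[c])))

print(predict([0.9, 0.1, 1.1]))  # feature vector resembling the "zebra" descriptor
```

The point of the sketch is only the pipeline shape — describe the unseen class, synthesize examples from the description, train a discriminative classifier on them — not the generator itself.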
|
84 |
Making Video Streaming More Efficient Using Per-Shot Encoding
Gådin, Douglas; Hermanson, Fanny; Marhold, Anton; Sikström, Joel; Winman, Johan, January 2022 (has links)
The demand for streaming high-quality video increases each year, and the energy used by consumers is estimated to increase by 23% from 2020 to 2030, with increased data transmission as the largest contributor. To minimise data transmission, a video encoding method called per-shot encoding can be used, which splits a video into smaller segments called shots and processes each one separately. With this method, the bitrate of a video can be reduced without compromising quality, so less data needs to be transmitted, which reduces energy consumption. In this project, a website that interfaces with a per-shot encoder is implemented. To evaluate the per-shot encoder, both visual quality and bitrate are measured quantitatively. In the evaluation, the bitrate was reduced by up to 2.5% for a selection of videos without compromising the viewing experience, a substantial decrease compared to alternative methods. / Efterfrågan av högkvalitativ videoströmning ökar varje år och konsumenters energianvändning uppskattas att ha ökat med 23% år 2030 jämfört med år 2020. Den största orsaken till detta är ökad dataöverföring. För att minska mängden data som behöver skickas kan per-shot-kodning användas, vilket är en videokodningsmetod som delar upp och bearbetar en video i ett flertal mindre delar som kallas shots. Bithastigheten för en video kan minskas med hjälp av per-shot-kodning utan att påverka kvaliteten. Detta leder till att mindre data behöver skickas, vilket innebär minskad energiförbrukning. I detta projekt har en per-shot-kodare tillsammans med en hemsida utvecklats. För att utvärdera per-shot-kodaren kommer skillnad i kvalitet och bithastighet att mätas kvantitativt. Utvärderingen har visat att per-shot-kodaren kan minska bithastigheten med upp till 2.5% för ett urval av videor, utan att påverka tittarupplevelsen. Detta är en avsevärd minskning jämfört med alternativa metoder.
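The core idea of per-shot encoding — pick, for every shot, the cheapest bitrate that still clears a quality target, instead of one bitrate for the whole file — can be sketched as follows. The quality model below is a made-up placeholder (real pipelines measure scores such as VMAF on trial encodes), and the bitrate ladder, threshold and complexity numbers are illustrative assumptions.

```python
# Per-shot encoding sketch: choose the lowest bitrate per shot whose
# predicted quality clears a target.

BITRATE_LADDER = [500, 1000, 2000, 4000, 8000]  # kbit/s
QUALITY_TARGET = 93                              # e.g. a VMAF-like score

def predicted_quality(shot_complexity, bitrate):
    """Stand-in quality model: quality rises with bitrate and falls with
    shot complexity. Real systems measure this on trial encodes."""
    return 100 - 60 * shot_complexity * 500 / bitrate

def pick_bitrate(shot_complexity):
    """Lowest ladder rung meeting the target, else the top rung."""
    for b in BITRATE_LADDER:
        if predicted_quality(shot_complexity, b) >= QUALITY_TARGET:
            return b
    return BITRATE_LADDER[-1]

# Three shots: a static talking head, a mid-motion pan, an action scene.
shots = [0.05, 0.2, 0.6]
plan = [pick_bitrate(c) for c in shots]
print(plan)  # one bitrate per shot instead of one for the whole video
```

The saving comes from the easy shots: a single-bitrate encode must pay the action scene's bitrate everywhere, while the per-shot plan drops the static shots to the cheapest rung that still meets the target.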
|
85 |
GENERATING SQL FROM NATURAL LANGUAGE IN FEW-SHOT AND ZERO-SHOT SCENARIOS
Asplund, Liam, January 2024 (has links)
Making information stored in databases accessible to users inexperienced in structured query language (SQL) by converting natural language to SQL queries has long been a prominent research area in both the database and natural language processing (NLP) communities. Numerous approaches have been proposed for this task, such as encoder-decoder frameworks, semantic grammars and, more recently, large language models (LLMs). When adapting LLMs to generate SQL queries from natural language questions, three notable methods are used: pretraining, transfer learning and in-context learning (ICL). ICL is particularly advantageous in scenarios where hardware is limited, time is a concern and large amounts of task-specific labeled data are nonexistent. This study evaluates two ICL strategies, zero-shot and few-shot scenarios, using the Mistral-7B-Instruct LLM. The few-shot scenarios were evaluated using two example-selection techniques: random selection and Jaccard similarity. The zero-shot scenarios served as a baseline for the few-shot scenarios to overcome, which ended as anticipated: the few-shot scenarios using Jaccard similarity outperformed the other two methods, followed by the few-shot scenarios using random selection, with the zero-shot scenarios performing worst. Evaluation results based on execution accuracy and exact-matching accuracy confirm that leveraging similarity when selecting demonstration examples for the prompt enhances the model's knowledge of the database schema and table names used during the inference phase, leading to more accurately generated SQL queries than leveraging diversity in demonstration examples.
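The Jaccard-based demonstration selection can be sketched as below: score each candidate question in the example pool by token-set overlap with the new question and put the top-k matches in the prompt. The tokenizer and the example questions are simplified assumptions, not the study's actual setup.

```python
def tokens(s):
    """Lower-case token set with basic punctuation stripped."""
    return {t.strip(".,?!") for t in s.lower().split()}

def jaccard(a, b):
    """Jaccard similarity between two token sets."""
    return len(a & b) / len(a | b) if a | b else 0.0

def select_demonstrations(question, pool, k=2):
    """Rank pool questions by Jaccard similarity to the new question and
    return the k best, to be used as few-shot examples in the prompt."""
    q = tokens(question)
    return sorted(pool, key=lambda p: jaccard(q, tokens(p)), reverse=True)[:k]

pool = [
    "How many employees work in the sales department?",
    "List the names of all customers from Berlin.",
    "How many orders were placed in 2023?",
]
best = select_demonstrations("How many customers are from Berlin?", pool, k=1)
print(best[0])
```

Random selection would instead draw k pool entries uniformly; the study's finding is that the similarity-ranked choice above gives the LLM more relevant schema and phrasing cues.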
|
86 |
Planejamento de processos de peen forming baseado em modelos analíticos do jato de granalhas e do campo de tensões residuais induzidas na peça. / Peen forming process planning based on analytical models of the shots' jet and residual stress fields induced on a plate.
Leite, Ricardo Augusto de Barros, 18 July 2016 (has links)
Peen forming é um processo de conformação plástica a frio de lâminas ou painéis metálicos através do impacto de um jato regulado de pequenas esferas de aço em sua superfície, a fim de produzir uma curvatura pré-determinada. A aplicação da técnica de shot peening como um processo de conformação já é conhecida da indústria desde a década de 1940, mas a demanda crescente por produtos de grande confiabilidade tem impulsionado o desenvolvimento de novas pesquisas visando o seu aperfeiçoamento e automação. O planejamento do processo de peen forming requer medição e controle de diversas variáveis relacionadas à dinâmica do jato de granalhas e à sua interação com o material a ser conformado. Conforme demonstrado por diversos autores, a velocidade de impacto é uma das variáveis que mais contribui para a formação do campo de tensões residuais que leva o material a se curvar. Neste trabalho é apresentado um modelo dinâmico simplificado que descreve o movimento de um grande número de pequenas esferas arrastadas por um fluxo de ar em regime permanente e sujeitas a múltiplas colisões entre si e com a peça a ser conformada. Simulações deste modelo permitiram identificar a correlação entre o campo de velocidades das granalhas e os demais parâmetros do processo. Mediante a aplicação da técnica de projeto de experimentos pôde-se estimar os valores dos parâmetros que otimizam o processo. Ao final, elaborou-se um algoritmo que permite realizar o planejamento de processos de peen forming, ou seja, determinar os valores desses parâmetros, de modo tal a produzir uma curvatura pré-determinada em uma placa metálica originalmente plana. / Peen forming is a cold plastic forming process that shapes metallic sheets or panels through the impact of a regulated blast of small round steel shots on their surface, in order to produce a predetermined curvature.
The application of shot peening as a forming process has been known in industry since the 1940s, but the increasing demand for highly reliable products has pushed the development of new research aimed at enhancing and automating it. Peen forming process planning requires the measurement and control of several variables concerning the dynamics of the shot jet and its interaction with the piece to be shaped. As previously shown by several authors, impact velocity is one of the variables that contributes most to the development of the residual stress field that causes the material to bend. In this work we present a simplified dynamical model describing the motion of a large number of small spheres (shot) dragged by a steady air flow and exposed to multiple collisions with each other and with the piece to be shaped. Computer simulations of this model made it possible to identify correlations between the shot velocity field and the parameters of the process. Applying design-of-experiments techniques, it was possible to estimate the parameter values that optimize the process. Finally, an algorithm was developed that enables peen forming process planning, i.e., determining these parameter values so as to produce a predetermined curvature in an originally flat metallic plate.
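For a single sphere, the kind of simplified dynamics described — shot dragged by a steady air flow — reduces to integrating Newton's law with a quadratic drag force, F = ½ ρ_air C_d A (u − v)|u − v|. The sketch below does an explicit-Euler integration of one particle; every numeric parameter is an illustrative assumption, not a value from the thesis, and collisions are ignored.

```python
import math

# One shot particle accelerated by a steady air stream (quadratic drag).
# All numbers below are illustrative assumptions, not values from the thesis.
rho_air = 1.2          # air density, kg/m^3
c_d = 0.47             # drag coefficient of a sphere
d = 0.6e-3             # shot diameter, m
rho_steel = 7800.0     # steel density, kg/m^3
area = math.pi * d**2 / 4
mass = rho_steel * math.pi * d**3 / 6
u_air = 80.0           # air stream velocity, m/s
dt = 1e-5              # time step, s

def step(v):
    """One explicit-Euler step: a = F_drag / m, with F ∝ (u - v)|u - v|."""
    rel = u_air - v
    f = 0.5 * rho_air * c_d * area * rel * abs(rel)
    return v + dt * f / mass

v, t = 0.0, 0.0
while v < 0.5 * u_air:   # integrate until the shot reaches half the air speed
    v, t = step(v), t + dt
print(round(v, 2), round(t * 1000, 1))  # velocity (m/s), elapsed time (ms)
```

Sweeping parameters such as air speed or shot diameter in this loop is the single-particle analogue of the correlation study between the shot velocity field and the process parameters.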
|
88 |
Characterization of pico- and nanosecond electron pulses in ultrafast transmission electron microscopy / Caractérisation des impulsions électroniques pico et nanoseconde en microscopie électronique en transmission ultrarapide
Bücker, Kerstin, 10 October 2017 (has links)
Cette thèse présente une étude des impulsions électroniques ultra-brèves en utilisant le nouveau microscope électronique en transmission ultrarapide (UTEM) à Strasbourg. La première partie porte sur le mode d'opération stroboscopique, basé sur l'utilisation d'un train d'impulsions d'électrons de l'ordre de la picoseconde pour l'étude des phénomènes réversibles ultrarapides. L'étude paramétrique effectuée a permis de révéler les dynamiques fondamentales des impulsions électroniques. Des mécanismes inconnus jusqu'alors et décisifs dans les caractéristiques des impulsions ont été dévoilés. Il s'agit des effets de trajectoire, qui limitent la résolution temporelle, et du filtrage chromatique, qui impacte la distribution en énergie et l'intensité du signal. Ces connaissances permettent aujourd'hui un paramétrage affiné de l'UTEM de manière à satisfaire les divers besoins expérimentaux. La deuxième partie concerne l'installation du mode d'opération complémentaire : le mode « single-shot ». Ce mode fait appel à une impulsion unique d'intensité élevée et d'une durée de l'ordre de la nanoseconde pour l'étude des phénomènes irréversibles. L'UTEM de Strasbourg étant le premier instrument single-shot équipé d'un spectromètre de perte d'énergie des électrons (EELS), l'influence de l'aberration chromatique a pu être étudiée en détail. Elle s'est dévoilée être une limitation majeure pour la résolution en imagerie, nécessitant d'ajuster le bon compromis avec l'aberration sphérique d'une part et l'intensité du signal d'autre part. Enfin, la faisabilité de mener des études en EELS ultrarapide avec une seule impulsion nanoseconde a pu être démontrée, ceci constituant une première mondiale. Ce résultat très prometteur ouvre un tout nouveau domaine d'expériences résolu en temps. / This thesis presents a study of ultrashort electron pulses using the new ultrafast transmission electron microscope (UTEM) in Strasbourg.
The first part focuses on the stroboscopic operation mode, which works with trains of picosecond multi-electron pulses in order to study ultrafast, reversible processes. A detailed parametric study was carried out, revealing fundamental principles of electron pulse dynamics. New mechanisms that define the pulse characteristics were unveiled: trajectory effects, which limit the temporal resolution, and chromatic filtering, which acts on the energy distribution and signal intensity. Guidelines can thus be given for optimum operation conditions adapted to different experimental requirements. The second part concerns the setup of the single-shot operation mode, based on intense nanosecond electron pulses for the investigation of irreversible processes. Since this is the first ns-UTEM equipped with an electron energy-loss spectrometer, the influence of chromatic aberration could be studied in detail; it was found to be a major limitation in imaging and has to be traded off against spherical aberration and signal intensity. For the first time, the feasibility of core-loss EELS with a single nanosecond electron pulse is demonstrated. This opens a new field of time-resolved experiments.
|
89 |
FAZT: FEW AND ZERO-SHOT FRAMEWORK TO LEARN TEMPO-VISUAL EVENTS FROM LITTLE OR NO DATA
Naveen Madapana (11613925), 20 December 2021 (links)
Supervised classification methods based on deep learning have achieved great success in many domains and tasks that were previously unimaginable. Such approaches build on learning paradigms that require hundreds of examples in order to learn to classify objects or events. Thus, their immediate application to domains with few or no observations is limited, because they lack the ability to rapidly generalize to new categories from a few examples or from high-level descriptions of categories. This can be attributed to the significant gap between the way machines represent knowledge and the way humans represent categories in their minds and learn to recognize them. In this context, this research represents categories as semantic trees in a high-level attribute space and proposes an approach to utilize these representations to conduct N-Shot, Few-Shot, One-Shot, and Zero-Shot Learning (ZSL). This work refers to this paradigm as the problem of general classification (GCP) and proposes a unified framework for GCP referred to as the Few and Zero-Shot Technique (FAZT). The FAZT framework is an end-to-end approach that uses trainable 3D convolutional neural networks and recurrent neural networks to simultaneously optimize for both the semantic and the classification tasks. Lastly, the problem of systematically obtaining semantic attributes by utilizing domain-specific ontologies is presented. The proposed framework is validated in the domains of hand gesture and action/activity recognition; however, this research can be applied to other domains such as video understanding, the study of human behavior, and emotion recognition. First, an attribute-based dataset for gestures is developed in a systematic manner by relying on the literature on gestures and semantics, and on crowdsourcing platforms such as Amazon Mechanical Turk. To the best of our knowledge, this is the first ZSL dataset for hand gestures (ZSGL dataset).
Next, our framework is evaluated under two experimental conditions: (1) within-category, to test the attribute-recognition power, and (2) across-category, to test the ability to recognize an unknown category. In addition, we conducted experiments under zero-shot, one-shot, few-shot and continuous-learning conditions in both open-set and closed-set scenarios. Results showed that our framework performs favorably on the ZSGL, Kinetics, UIUC Action, UCF101 and HMDB51 action datasets in all the experimental conditions.
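The attribute-space idea behind this kind of zero-shot recognition can be illustrated with a tiny matcher: an upstream model outputs per-attribute scores for a clip, and the unseen class whose semantic description agrees best with those scores wins. All attribute names, values and scores below are invented for illustration; the framework itself learns the attribute scores with deep networks.

```python
# Zero-shot recognition sketch: classes are described by high-level
# attributes, so an unseen class can be recognized from predicted
# attributes alone, without any training examples of that class.

gesture_attributes = {
    "wave":       {"one_hand": 1, "circular": 0, "repeated": 1},
    "swipe_left": {"one_hand": 1, "circular": 0, "repeated": 0},
    "stir":       {"one_hand": 1, "circular": 1, "repeated": 1},
}

def match(predicted, descriptions):
    """Pick the class whose binary attribute description best agrees with
    the soft per-attribute scores from an upstream attribute classifier."""
    def agreement(desc):
        return sum(predicted[a] * v + (1 - predicted[a]) * (1 - v)
                   for a, v in desc.items())
    return max(descriptions, key=lambda c: agreement(descriptions[c]))

# Hypothetical soft attribute scores for one unseen video clip.
predicted = {"one_hand": 0.9, "circular": 0.8, "repeated": 0.7}
print(match(predicted, gesture_attributes))
```

Replacing the flat attribute dictionaries with the semantic trees described in the abstract changes the scoring function, not the overall pipeline.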
|
90 |
Berättarteknikernas kraft i filmskapande : - Long take och one shot
Arkenstedt, Elias; Calla Kjellin, Isabella, January 2020 (has links)
Abstrakt Detta kandidatarbete tar upp berättartekniker som long take och one shot. Undersökningen handlar om att kombinera dessa två filmtekniker som inte används lika ofta i dagens filmer genom att skapa en film där handlingen och ljudets betydelse samverkar med teknikerna, som leder till en fängslande filmupplevelse. Vi vill använda dessa tekniker för att visa och kritisera berättarteknikernas betydelse i film, att det blir en del av handlingen än bara ett objekt. För att driva handlingen framåt har vi studerat sjukdomen schizofreni för att skapa en berättelse. Vi utgick från Mats Ödeens manusmetoder för att bygga en grund till gestaltningen och Natalie Ednells kunskap om schizofreni. Utförandet bestod av många tester av teknikerna och ljudet. En av testerna bestod av skådespelare på plats. Resultatet av undersökningen blev en filmupplevelse där alla beståndsdelar har en betydelse och ger ett annat berättarperspektiv angående schizofreni. Det finns betydligt fler berättartekniker men vi valde att fokusera på long take och one shot för att visa deras potential inom berättande. / Abstract This Bachelor thesis deals with storytelling techniques such as long take and one shot. The study is about combining these two film techniques, which are not used as often in today's films, by creating a film in which the action and the meaning of the sound interact with the techniques, leading to a captivating film experience. We want to use these techniques to show and criticize the importance of storytelling techniques in film, so that they become a part of the action rather than just an object. To drive the action forward, we have studied the disease schizophrenia to create a story. We used Mats Ödeen's script methods to build a foundation for the design and Nathalie Ednell's knowledge of schizophrenia. The design consisted of many tests of the techniques and the sound. One of the tests had an actor on site.
The result of the study was a cinematic experience where all the elements have meaning and give a different narrative perspective on schizophrenia. There are significantly more storytelling techniques, but we chose to focus on long take and one shot to show their potential in storytelling.
|