11

Video inpainting and semi-supervised object removal / Inpainting de vidéos et suppression d'objets semi-supervisée

Le, Thuc Trinh 06 June 2019 (has links)
Nowadays, the rapid growth of video content creates a massive demand for video editing applications. In this dissertation, we solve several problems relating to video post-processing, focusing on the application of object removal in video. We divide this task into two problems: (1) a video object segmentation step to select which objects to remove, and (2) a video inpainting step to fill the damaged regions.

For the video segmentation problem, we design a system suitable for object removal applications with different requirements in terms of accuracy and efficiency. Our approach relies on the combination of convolutional neural networks (CNNs) for segmentation and a classical mask tracking method. In particular, we adopt image segmentation networks and apply them to the video case by performing frame-by-frame segmentation. By exploiting both offline and online training with first-frame annotation only, the networks are able to produce highly accurate video object segmentation. In addition, we propose a mask tracking module to ensure temporal continuity and a mask linking module to ensure identity coherence across frames. Moreover, we introduce a simple way to learn a dilation layer on the mask, which helps us create masks suited to the video object removal application.

For the video inpainting problem, we divide our work into two categories based on the type of background. We present a simple motion-guided pixel propagation method for static-background cases, and show that object removal with a static background can be solved efficiently using a simple motion-based technique. To deal with dynamic backgrounds, we introduce a video inpainting method that optimizes a global patch-based energy function. To increase the speed of the algorithm, we propose a parallel extension of the 3D PatchMatch algorithm. To improve accuracy, we systematically incorporate optical flow into the overall process. We end up with a video inpainting method able to reconstruct moving objects as well as reproduce dynamic textures while running in a reasonable time.

Finally, we combine the video object segmentation and video inpainting methods into a unified system that removes undesired objects from videos. To the best of our knowledge, this is the first system of its kind. The user only needs to roughly delimit, in the first frame, the objects to be edited; this annotation process is facilitated by superpixels. These annotations are then refined and propagated through the video by the video object segmentation method, and one or several objects can be removed automatically using our video inpainting methods. The result is a flexible computational video editing tool with numerous potential applications, ranging from crowd removal to the correction of unphysical scenes.
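The static-background case admits a very compact illustration. The sketch below is not the thesis's code: the frame alignment, the data layout, and the nearest-frame search strategy are all assumptions made for the example. It fills each masked pixel by borrowing the temporally nearest unoccluded observation of the same location, which is the core intuition behind motion-guided propagation once global motion has been compensated.

```python
import numpy as np

def propagate_static_background(frames, masks):
    """Fill masked (removed-object) pixels by copying the same pixel from the
    temporally nearest frame where it is unoccluded. Assumes a static
    background and an already globally-aligned sequence."""
    filled = [f.astype(float).copy() for f in frames]
    n = len(frames)
    for t in range(n):
        for y, x in zip(*np.nonzero(masks[t])):
            # search outward in time for an unoccluded observation of (y, x)
            for d in range(1, n):
                hit = next((s for s in (t - d, t + d)
                            if 0 <= s < n and not masks[s][y, x]), None)
                if hit is not None:
                    filled[t][y, x] = frames[hit][y, x]
                    break
    return filled
```

With dynamic backgrounds this simple temporal lookup fails, which is why the abstract switches to a patch-based energy for those cases.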
12

Interface expandida - o vídeo na comunicação on-line (Expanded interface: video in online communication)

Borovik, Rogério Largman 10 October 2005 (has links)
The new conditions provided by today's technological advancements have made possible different modes of videographic production. While concentrating many different human activities in a single individual, the processes inherent to the digital universe have contributed entirely new tools for communication and collaborative production. The concept of authorship is being questioned, both in collective making and in the use of recyclable material, in such a way that a cultural substrate considered banal, everyday, and disposable can be transformed into raw material for artistic creation. The main objectives of this study were to present a historical overview of digital video, studying its communication process as enhanced by the rise of the Internet, and to analyze the possibilities inscribed in the audiovisual authoring software Keyworx, which is original both in its conceptual proposal and in its shape, interface, and action. Keyworx is not a product manufactured by a company, but rather an educational project of the Waag Society, a Dutch research institution. The software fuses aspects of multi-user teleconference environments with those of digital media processing software (popularly known as 'VJ software'). A deeper discussion of the digital video interface and its functioning proved to be of the utmost importance for reflecting on the particularities of a collaborative video-creation model on the net. In this sense, the theories of Edmond Couchot, Lev Manovich, Arlindo Machado, Jay David Bolter, Richard Grusin, and Giselle Beiguelman occupy a key position in this study, because they elucidate aspects of the binary nature of digital media and their implications for authoring processes; interviews with the creators of Keyworx complement them. We observed that, since the beginnings of digital video on the Internet, the communication roles of sender and receiver have had their definitions broadened, converting both into interactors in a dialogical communication, which implies changes in the way videos are created and distributed in the media.
13

影片動跡剪輯 (Motion-based Video Editing)

王智潁, Wang, Chih-Ying Unknown Date (has links)
With the rapid increase of multimedia applications in the modern commercial and movie business, efficient video editing tools have become increasingly desirable. However, conventional video editing requires many manual interventions, which reduces productivity as well as opportunities for better results. In this thesis, we propose a MOtion-based Video Editing (MOVE) mechanism that can automatically select the most similar or suitable transition points from a given set of raw videos. A given video can be divided into a set of video clips using a shot detection algorithm. For each video clip, we provide an algorithm that can separate the global motions from the local motions using the principles of video object planes and accumulated differences. We introduce the concept of spatio-temporal information, a condensed representation associated with a video clip, and use it to find good video editing points. Since the spatio-temporal information is a concise representation of a video clip, searching in this domain reduces the complexity of the problem and achieves better performance. We implemented our mechanism and validated it with successful experiments.
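The first stage the abstract describes, splitting a video into clips at detected shot changes, is commonly approximated with a histogram-difference test. The sketch below is a generic baseline, not the thesis's algorithm; the gray-level binning, the L1 distance, and the threshold value are illustrative assumptions.

```python
import numpy as np

def detect_shot_boundaries(frames, threshold=0.5, bins=16):
    """Flag a shot boundary between consecutive frames whenever their
    normalized gray-level histograms differ by more than `threshold`
    in L1 distance. Returns indices i such that a cut lies between
    frame i-1 and frame i."""
    boundaries = []
    prev_hist = None
    for i, frame in enumerate(frames):
        hist, _ = np.histogram(frame, bins=bins, range=(0, 256))
        hist = hist / hist.sum()
        if prev_hist is not None and np.abs(hist - prev_hist).sum() > threshold:
            boundaries.append(i)
        prev_hist = hist
    return boundaries
```

Each resulting clip would then be analyzed separately for its global and local motion, as the abstract outlines.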
14

Some problems on temporally consistent video editing and object recognition

Sadek, Rida 07 December 2012 (has links)
Video editing and object recognition are two significant fields in computer vision: the first has remarkably assisted the digital production and post-production of video footage; the second is fundamental to image classification and image-based search in large databases (e.g. the web). In this thesis, we address two problems: we present a novel formulation that tackles video editing tasks, and we develop a mechanism for generating more robust descriptors for objects in an image.

Concerning the first problem, this thesis proposes two variational models to perform temporally coherent video editing. These models are applied to change an object's (rigid or non-rigid) texture throughout a given video sequence. One model is based on propagating color information from a given frame (or between two given frames) along the motion trajectories of the video, while the other is based on propagating gradient-domain information. The models presented in this thesis require minimal user intervention and automatically accommodate illumination changes in the scene.

Concerning the second problem, this thesis addresses affine invariance in object recognition. We introduce a way to generate geometric affine invariant quantities that are used in the construction of feature descriptors, and we show that using these quantities does indeed achieve more robust recognition than state-of-the-art descriptors.
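The first model's idea of pushing color along motion trajectories can be caricatured with simple forward warping. This is a toy sketch, not the variational model of the thesis: integer displacements, a dense per-pixel flow field, and the absence of any regularization or illumination term are all simplifying assumptions.

```python
import numpy as np

def propagate_color(key_color, flows):
    """Push the color of a key frame forward along per-frame motion vectors.
    key_color: (H, W, 3) array; flows: list of (H, W, 2) arrays holding
    (dy, dx) displacements from frame t to frame t+1.
    Returns one color image per frame."""
    h, w, _ = key_color.shape
    colors = [key_color.astype(float)]
    for flow in flows:
        cur = colors[-1]
        nxt = np.zeros_like(cur)
        for y in range(h):
            for x in range(w):
                dy, dx = flow[y, x]
                ny, nx = y + int(dy), x + int(dx)
                if 0 <= ny < h and 0 <= nx < w:
                    nxt[ny, nx] = cur[y, x]  # color follows the trajectory
        colors.append(nxt)
    return colors
```

In the variational setting described by the abstract, the warping is instead expressed as an energy to be minimized, which fills disocclusions and handles illumination changes that this naive forward warp leaves as holes.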
15

The iterative frame : algorithmic video editing, participant observation & the black box

Rapoport, Robert S. January 2016 (has links)
Machine learning is increasingly involved in both our production and consumption of video. One symptom of this is the appearance of automated video editing applications. As this technology spreads rapidly to consumers, the need for substantive research about its social impact grows. To this end, this project maintains a focus on video editing as a microcosm of larger shifts in cultural objects co-authored by artificial intelligence. The window in which this research occurred (2010-2015) saw machine learning move increasingly into the public eye, and with it ethical concerns. What follows is, on the most abstract level, a discussion of why these ethical concerns are particularly urgent in the realm of the moving image.

Algorithmic editing consists of software instructions that automate the creation of timelines of moving images. The criteria that this software uses to query a database are variable. Algorithmic authorship already exists in other media, but I will argue that the moving image is a separate case insofar as the raw material of text and music software can develop on its own. The performance of a trained actor still cannot be generated by software. Thus, my focus is on the relationship between live embodied performance and the subsequent algorithmic editing of that footage. This process can employ other software, such as computer vision (to analyze the content of video) and predictive analytics (to guess what kind of automated film to make for a given user). How is performance altered when it has to communicate to human and non-human alike?

The ritual of the iterative frame gives literal form to something that throughout human history has been a projection: the omniscient participant observer, more commonly known as the Divine. We experience black-boxed software (AIs, specifically neural networks, which are intrinsically opaque) as functionally omniscient and tacitly allow it to edit more and more of life (e.g. filtering articles, playlists, and even potential spouses). As long as it remains disembodied, we will continue to project the Divine onto the black box, causing cultural anxiety. In other words, predictive analytics alienate us from the source code of our cultural texts. The iterative frame, then, is a space in which these forces can be inscribed on the body, and hence narrated. The algorithmic editing of content is already taken for granted. The editing of moving images, in contrast, still requires a human hand. We need to understand the social power of moving-image editing before it is delegated to automation.

Practice Section: This project is practice-led, meaning that the portfolio of work was produced as it was being theorized. To underscore this, the portfolio comes at the end of the document. Video editors use artificial intelligence (AI) in a number of different applications, from deciding the sequencing of timelines to using facial and language detection to find actors in archives. This changes traditional production workflows on a number of levels. How can the single decision to cut between two frames of video speak to the larger epistemological shifts brought on by predictive analytics and the Big Data upon which they rely? When predictive analytics begin modeling the world of moving images, how will our own understanding of the world change? In the practice-based section of this thesis, I explore how these shifts will change the way in which actors might approach performance. What does a gesture mean to AI, and how will the editor decontextualize it? The set of a video shoot that will employ an element of AI in editing represents a move towards the ritualization of production, summarized in the term 'iterative frame'. The portfolio contains eight works that treat the set as a microcosm of larger shifts in the production of culture. There is, I argue, metaphorical significance in the changing understanding of terms like 'continuity' and 'sync' on the AI-watched set.

Theory Section: In the theoretical section, the approach is broadly comparative. I contextualize the current dynamic by looking at previous shifts in technology that changed the relationship between production and post-production, notably the lightweight recording technology of the 1960s. This section also draws on debates in ethnographic filmmaking about the matching of film and ritual. In this body of literature, there is a focus on how participant observation can be formalized in film. Triangulating between event, participant observer, and edit grammar in ethnographic filmmaking provides a useful analogy for understanding how AI as film editor might function in relation to contemporary production. Rituals occur in a frame that depends on a spatially and temporally separate observer. This dynamic also exists on sets bound for post-production involving AI. The convergence of film grammar and ritual grammar occurred in the 1960s under the banner of cinéma vérité, in which the relationship between the participant observer/ethnographer and the subject became most transparent. In Rouch and Morin's Chronicle of a Summer (1961), reflexivity became ritualized in the form of on-screen feedback sessions. The edit became transparent: the black box of cinema disappeared. Today, as artificial intelligence enters the film production process, this relationship begins to reverse: feedback, while it exists, becomes less transparent. The weight of the feedback ritual shifts gradually from presence and production to montage and post-production. Put differently, in cinéma vérité the participant observer was most present in the frame. As participant observation gradually becomes shared with code, it becomes more difficult to give it an embodied representation, and thus its presence is felt more in the edit of the film. The relationship between the ritual actor and the participant observer (the algorithm) is completely mediated by the edit: a reassertion of the black box, where once it had been transparent.

The crucible for looking at the relationship between algorithmic editing, participant observation, and the black box is the subject in trance. In ritual trance the individual is subsumed by collective codes. Long before the advent of automated editing, trance was an epistemological problem posed to film editing. In the iterative frame, for the first time, film grammar can echo ritual grammar and indeed become continuous with it. This occurs by removing the act of cutting from the causal world and projecting this logic of post-production onto performance. Why does this occur? Ritual, and specifically ritual trance, is the moment when a culture gives embodied form to what it could not otherwise articulate. The trance of predictive analytics, the AI that increasingly choreographs our relationship to information, is the ineffable that finds form in the iterative frame. In the iterative frame a gesture never exists in a single instance, but in a potential state. The performers in this frame begin to understand themselves in terms of how automated indexing processes reconfigure their performance. To the extent that gestures are complicit with this mode of databasing, they can be seen as votive toward the algorithmic. The practice section focuses on the poetics of this position.

Chapter One focuses on cinéma vérité as a moment in which the relationship between production and post-production shifted as a function of more agile recording technology, allowing the participant observer to enter the frame. This shift becomes a lens through which to look at the changes that AI might bring. Chapter Two treats the work of Pierre Huyghe as a 'liminal phase' in which a new relationship between production and post-production is explored. Finally, Chapter Three looks at a film in which actors perform with the awareness that the footage will be processed by an algorithmic edit. The conclusion considers how this way of relating to AI, especially commercial AI, through embodied performance could foster a more critical relationship to the proliferating black-boxed modes of production.
16

Méthodes variationnelles pour la colorisation d’images, de vidéos, et la correction des couleurs / Variational methods for image and video colorization and color correction

Pierre, Fabien 23 November 2016 (has links)
This thesis deals with problems related to color. In particular, we are interested in problems that arise in image and video colorization and in contrast enhancement. Considering a color image as composed of two complementary pieces of information, one achromatic (without color) and the other chromatic (in color), the applications studied in this thesis process one of these components while preserving its complement. In colorization, the challenge is to compute a color image while constraining its gray-scale channel. Contrast enhancement aims to modify the intensity channel of an image while preserving its hue.

These joined problems require a formal study of the geometry of the RGB space. In this work, it is shown that the classical color spaces of the literature designed to solve these classes of problems lead to errors. A novel algorithm, called luminance-hue specification, which computes a color with a given hue and luminance, is described in this thesis, and its extension to a variational framework is proposed. This model has been used successfully to enhance color images, using well-known assumptions about the human visual system.

The state-of-the-art methods for image colorization fall into two categories. The first includes those that diffuse color scribbles drawn by the user (manual colorization). The second consists of those that rely on a reference color image, or a base of reference images, to transfer colors from the reference to the grayscale image (exemplar-based colorization). Both approaches have their advantages and drawbacks. In this thesis, we design a variational model for exemplar-based colorization, which is extended into a method unifying manual and exemplar-based colorization. Finally, we describe two variational models that colorize videos in interaction with the user.
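The luminance-hue specification idea, computing an RGB color with a prescribed hue and luminance while staying inside the RGB cube, can be sketched as follows. This is a simplified reconstruction from the abstract, not the thesis's algorithm: the Rec. 601 luma weights and the choice of the most saturated in-gamut color are assumptions made for the example.

```python
import colorsys
import numpy as np

W = np.array([0.299, 0.587, 0.114])  # Rec. 601 luma weights (sum to 1)

def luminance_hue_specification(hue_deg, target_luma):
    """Return an RGB color in [0,1]^3 with the requested hue and luma,
    choosing the most saturated such color inside the RGB cube."""
    # fully saturated color of the requested hue
    pure = np.array(colorsys.hls_to_rgb(hue_deg / 360.0, 0.5, 1.0))
    gray = np.full(3, target_luma)
    # chroma direction with zero luma: W @ direction == 0,
    # so moving along it never changes the luma
    direction = pure - float(W @ pure)
    # largest step keeping gray + alpha * direction inside [0,1]^3
    alpha = 1.0
    for g, d in zip(gray, direction):
        if d > 1e-9:
            alpha = min(alpha, (1.0 - g) / d)
        elif d < -1e-9:
            alpha = min(alpha, -g / d)
    alpha = max(alpha, 0.0)
    return np.clip(gray + alpha * direction, 0.0, 1.0)
```

The thesis's point that naive color-space conversions "lead to errors" corresponds here to the gamut clamp: without the alpha computation, imposing hue and luminance independently can push channels outside [0,1] and silently change one of the two quantities.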
17

YouTube como herramienta complementaria en cursos universitarios de edición y postproducción audiovisual / YouTube tutorials as a complementary tool in university courses of audiovisual editing and post-production

De Olarte Ramírez, Humberto Jorge 15 December 2020 (has links)
This research seeks to show how YouTube tutorials work when used as a complementary tool in the learning process of university audiovisual editing and post-production courses, since, in recent years, both teachers and students have progressively integrated the use of digital platforms into higher education. For courses such as film editing and post-production, YouTube was used to reinforce learning or answer questions. For this study, video tutorials from the YouTube channels RunbenGuo, Yoney Gallardo, and Nuvaproductions were used. The main objective was to understand how YouTube tutorials function when used as a complementary tool in the learning process of university editing and post-production courses. A qualitative methodology with a phenomenological design was developed, consisting of content analysis of video tutorials from the aforementioned channels and interviews with teachers, an expert, and students of the audiovisual communication and interactive media program at the Universidad Peruana de Ciencias Aplicadas. From this analysis, it can be concluded that YouTube tutorials work as a tool to improve and amplify the student's university learning process, whether out of the need to pass the course with a minimum passing grade or in order to train as a professional. In the post-production course this is more evident, and it depends on the motivation to learn as well as on the use of appropriate tutorials.
18

Design of Video Editing Interface for Collaboration

Tholsby, Ellen January 2023 (has links)
Video editing is a process in which multiple roles are required to collaborate. Despite this, the design of video editing software does not easily support collaboration. This study therefore investigates how the video editing workflow can be improved by designing a user interface that supports collaboration, with a focus on simultaneous work performed in a remote workspace. First, a design workshop was conducted with professional video editors. Second, user interface mockups were created. Finally, the mockups were evaluated in a mockup test and subsequently updated based on the feedback. The study proposes five design concepts that collectively aim to support collaboration through workspace awareness, focusing on enhanced communication to make the workflow more time-efficient. The evaluation of the design concepts indicates their potential to facilitate the collaborators' collective work and communication. The results therefore suggest that the workflow can be improved by including collaborative features in video editing software.
19

Virtual video editing platform - toma 4

Linares Villafuerte, Nelly Luzmila Joselyn, Llontop Baldera, Karina Stephany, Orozco Laoyza, Carolina Victoria, Rodriguez Núñez, Maria Jose 05 July 2021 (has links)
This research aims to design a virtual platform where people can access a video editor that is user-friendly in its tools and easy to use, and where they can upload their own creations to share with friends or contacts. It is presented as a feasible project that can be profitable over time and that solves the problems of young people and entrepreneurs who have not yet mastered the tools of video editing platforms. As a solution, we propose a video editor with basic tools, short video tutorials explaining each tool's functions, video templates for better editing, and formats adapted to the social network the video is intended for, among other options, so that users become familiar with the platform. The solution was validated through interviews with both potential users and subject-matter experts, taking their opinions and recommendations into account in order to make modifications and develop a page and video editor prototype matching what our customer segment is looking for. Posts were also made on social networks such as Instagram and Facebook so that the project reaches more users and enables more direct communication with them. Likewise, three-year financial calculations for the project were carried out, taking into account simulated purchase orders from recent weeks and sales projections based on the periodic marketing investments to be made, as well as the financing we will obtain from the bank, from shareholders, and from a non-traditional source. Finally, we present the financial ratios and indicators that support the viability of TOMA4. / Research paper
20

TEXTILE - Augmenting Text in Virtual Space

Hansen, Simon January 2016 (has links)
Three-dimensional literature is a virtually non-existent, or in any case very rare and emergent, digital art form, defined by the author as a unit of text that is not confined to the two-dimensional layout of print literature but is instead mediated across all three axes of a virtual space. In collaboration with two artists, the author explores through a bodystorming workshop how writers and readers could create and experience three-dimensional literature in mixed reality, using mobile devices equipped with motion sensors that enable users to perform embodied interactions as an integral part of the literary experience. To document the workshop, the author used body-mounted action cameras to record the participants' point of view. This choice turned out to generate promising knowledge on using point-of-view footage as an integral part of the methodological approach: the author found that by engaging creatively with such footage, the designer gains a profound understanding and vivid memory of complex design activities. As the outcome of the various design activities, the author developed a concept for an app called TEXTILE. It enables users to build three-dimensional texts by positioning words in a virtual bubble of space around the user and to share them, either on an online platform or at site-specific places. A key finding of this thesis is that the creation of three-dimensional literature on a platform such as TEXTILE is not just an act of writing – it is an act of sculpture and an act of social performance.
