1 |
Six English novels adapted for the cinema. Strong, Richard Jeremy. January 1999.
This study examines the film adaptations of six English novels: Sense and Sensibility, Emma, Tess, Jude, A Room with a View and A Passage to India. Through textual analysis of both the films and the original novels it demonstrates that many of the changes which occur in the transition between media are explicable in terms of differences between film and literary genres. Most previous writing on adaptation has tended to explain such changes as a consequence of film and literature having different signifying or expressive capacities. Whilst this study does not argue that literary styles and devices have necessary or inevitable equivalents in film form, it does propose that filmmakers can find satisfying and comprehensible correlatives for written idioms, and that differences between novels and their adaptations are therefore not always best understood as arising from failures in the mechanics of translation. In its consideration of what each film alters and omits, this study finds compelling evidence that the stories are reshaped in particularly genre-related ways. This takes the form both of alterations that place an adaptation more comfortably in a particular film genre than the original story materials might allow, and of changes which diminish or elide the operation of a literary genre to which the original novel belongs or relates. Sense and Sensibility, Emma and A Room with a View are discussed in terms of how they become romantic comedies, while the Hardy adaptations omit most of the original melodrama. Other genres and modes which pose problems and questions in adaptation, including tragedy, the didactic and the modern, are also examined. Additionally, this study considers the political contexts and conditions of production of the novels and their adaptations, as well as examining the extent to which the films may be said to be authored.
|
2 |
Optimal Video Adaptation For Resource Constrained Mobile Devices Based On Utility Theory. Onur, Ozgur Deniz. 01 January 2003.
This thesis proposes a novel system to determine the best representation of a video, in the sense that a user watching the video reaches the highest level of satisfaction possible given the resource capabilities of the viewing device. Utility theory is used to obtain a utility function representing user satisfaction as a function of the video coding parameters and the viewing device capabilities. The utility function is formulated as the weighted sum of three individual components, chosen such that the satisfaction on any one component is independent of the satisfaction on every other component. The advantage of such a decomposition is that each component can be expressed as a simple mathematical relation modeling user satisfaction. The unknown parameters of these models are then determined from the results of subjective tests performed by a multitude of users. Finally, simulated annealing is used to find the global optimum of this utility function. Simulation results based on subjective viewing tests on a resource-limited mobile device indicate that the optimal encoding parameters determined in this way yield consistent user satisfaction.
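As a rough illustration of the approach summarized above, the sketch below maximizes a weighted sum of three independent satisfaction components with simulated annealing. The component models, weights and parameter ranges are illustrative placeholders, not the functions fitted from the thesis's subjective tests.

```python
import math
import random

# Hypothetical per-component satisfaction models in [0, 1]; the real models
# are fitted from subjective tests, so these are illustrative placeholders.
def u_bitrate(bitrate_kbps):
    return 1.0 - math.exp(-bitrate_kbps / 300.0)

def u_framerate(fps):
    return min(fps / 25.0, 1.0)

def u_resolution(scale):  # scale in (0, 1], fraction of the native frame size
    return scale ** 0.5

WEIGHTS = (0.5, 0.3, 0.2)  # assumed weights of the three components

def utility(params):
    b, f, s = params
    return sum(w * u for w, u in zip(WEIGHTS, (u_bitrate(b), u_framerate(f), u_resolution(s))))

def neighbour(params):
    # Random perturbation clamped to ranges standing in for device constraints.
    b, f, s = params
    return (max(50.0, min(1000.0, b + random.uniform(-50, 50))),
            max(5.0, min(30.0, f + random.uniform(-2, 2))),
            max(0.25, min(1.0, s + random.uniform(-0.05, 0.05))))

def simulated_annealing(start, t0=1.0, cooling=0.995, steps=5000):
    current, best = start, start
    temperature = t0
    for _ in range(steps):
        candidate = neighbour(current)
        delta = utility(candidate) - utility(current)
        if delta > 0 or random.random() < math.exp(delta / temperature):
            current = candidate
            if utility(current) > utility(best):
                best = current
        temperature *= cooling
    return best

print(simulated_annealing((200.0, 15.0, 0.5)))  # (bitrate kbps, fps, resolution scale)
```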
|
3 |
Visual Attention-based Small Screen Adaptation for H.264 Videos. Mukherjee, Abir. January 2008.
We develop a framework that uses visual attention analysis combined with temporal coherence to detect the attended region in an H.264 video bitstream and display it on a small screen. A visual attention module based upon Walther and Koch's model gives us the attended region in I-frames. We propose a temporal coherence matching framework that uses the motion information in P-frames to extend the attended region over the H.264 video sequence. Evaluations show encouraging results, with an over 80% successful detection rate for objects of interest and 85% of respondents reporting satisfactory output.
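A rough sketch of the temporal-coherence idea follows, assuming the I-frame saliency map and the P-frame motion-vector fields have already been extracted from the bitstream; the function names and data layout are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def attended_region(saliency, threshold=0.6):
    """Bounding box (x0, y0, x1, y1) of pixels whose saliency is near the maximum."""
    ys, xs = np.where(saliency >= threshold * saliency.max())
    return int(xs.min()), int(ys.min()), int(xs.max()) + 1, int(ys.max()) + 1

def propagate_region(box, motion_vectors, mb_size=16):
    """Shift the attended box by the mean motion vector of the macroblocks it covers.

    motion_vectors: array of shape (mb_rows, mb_cols, 2) holding (dx, dy) per macroblock.
    """
    x0, y0, x1, y1 = box
    mvs = motion_vectors[y0 // mb_size:(y1 + mb_size - 1) // mb_size,
                         x0 // mb_size:(x1 + mb_size - 1) // mb_size]
    dx, dy = mvs.reshape(-1, 2).mean(axis=0)
    return int(x0 + dx), int(y0 + dy), int(x1 + dx), int(y1 + dy)

# Usage over one GOP: run the saliency model on the I-frame, then track through P-frames.
# box = attended_region(saliency_map_of_i_frame)
# for mv_field in p_frame_motion_fields:
#     box = propagate_region(box, mv_field)
```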
|
4 |
Adaptação de vídeo ao vivo apoiada em informações de contexto / Live video adaptation based on context information. Manzato, Marcelo Garcia. 22 September 2006.
This dissertation presents the development of a mechanism for the automatic adaptation of live MPEG-4 video, so that it meets the current needs and capabilities of users and of the system. One of the challenges in this area is capturing and representing the information needed to perform the adaptation. Using techniques from context-aware computing, an extensible model for representing devices was developed, together with automatic and semi-automatic methods for capturing the required information. The work adopts a video transcoding model, which can introduce delays that make live video adaptation unfeasible for interactive applications; it therefore also evaluates the impact of transcoding on the total end-to-end delay perceived by the user.
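A minimal sketch of the kind of extensible device description the abstract mentions, paired with a context-driven choice of transcoding parameters; the attribute names and defaults are assumptions for illustration, not the model defined in the dissertation.

```python
from dataclasses import dataclass, field

@dataclass
class DeviceProfile:
    """Extensible description of a client device; extra capabilities go in `extras`."""
    screen_width: int
    screen_height: int
    supported_codecs: tuple = ("mpeg4",)
    max_bitrate_kbps: int = 512
    extras: dict = field(default_factory=dict)

def target_parameters(profile, available_bandwidth_kbps):
    """Pick transcoding parameters from the device capabilities and network context."""
    bitrate = min(profile.max_bitrate_kbps, available_bandwidth_kbps)
    return {
        "codec": profile.supported_codecs[0],
        "width": profile.screen_width,
        "height": profile.screen_height,
        "bitrate_kbps": bitrate,
    }

phone = DeviceProfile(320, 240, extras={"battery_level": 0.4})
print(target_parameters(phone, available_bandwidth_kbps=384))
```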
|
5 |
Adaptação de stream de vídeo em veículos aéreos não tripulados / Video stream adaptation on unmanned aerial vehicles. Martinelli, Thiago Henrique. 24 September 2012.
Unmanned Aerial Vehicles (UAVs) are increasingly used in many countries, in both military and civilian applications. The scenario considered in this study is a UAV capturing video in real time and transmitting it to a ground station over a wireless network. The problem is that a continuous transmission rate with stable bandwidth cannot be guaranteed. This is due to factors such as the speed of the aircraft (on the order of hundreds of km/h), terrain irregularities that block the line of sight of the transmission link, and weather conditions such as storms that can interfere with the RF transmission. The movements the UAV performs in flight (roll, pitch and yaw) can also impair link availability. The video must therefore be adapted to the available bandwidth: when the link quality degrades, the amount of video data must be reduced to avoid interrupting the transmission, while the adaptation must also exploit the bandwidth that is available, avoiding the delivery of video at lower quality than a given bandwidth would allow. This work proposes a system that varies the total amount of data transmitted by adjusting the compression parameters of the video, covering the bandwidth range from 8 Mbps down to zero, and uses the H.264/AVC standard with scalable video coding.
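A simplified sketch of bandwidth-driven layer selection with scalable coding, in the spirit of the approach described above; the layer set and bitrates are illustrative figures, not measurements from the thesis.

```python
# Cumulative bitrate (kbps) needed to decode up to each H.264/SVC layer;
# the figures below are illustrative, not values measured in the thesis.
SVC_LAYERS = [
    ("base layer, QCIF @ 7.5 fps", 250),
    ("spatial enhancement 1, CIF @ 15 fps", 900),
    ("spatial enhancement 2, 4CIF @ 30 fps", 2500),
    ("quality enhancement, 4CIF @ 30 fps", 6000),
]

def select_layers(available_kbps):
    """Return the highest set of layers that fits the measured bandwidth (0..8000 kbps)."""
    chosen = []
    for name, cumulative_kbps in SVC_LAYERS:
        if cumulative_kbps <= available_kbps:
            chosen.append(name)
        else:
            break
    return chosen  # an empty list means the link cannot sustain even the base layer

for bandwidth in (8000, 3000, 400, 0):
    print(bandwidth, "kbps ->", select_layers(bandwidth))
```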
|
6 |
Etude et mise en place d’une plateforme d’adaptation multiservice embarquée pour la gestion de flux multimédia à différents niveaux logiciels et matériels / Conception and implementation of a hardware-accelerated video adaptation platform in a home network context. Aubry, Willy. 19 December 2012.
On the one hand, technological advances have driven the large-scale expansion of the handheld-device market, so people are connected more and more, everywhere, and more and more data are exchanged over the Internet. On the other hand, this growing number of users and the strong growth of the available content, in both quantity and quality, saturate the networks, and increasing the physical capacity (for example, moving to optical fibre) is not enough. To overcome this, networks must take into account the nature of the content (text, video, ...) and the context of use (network state, terminal capabilities, ...) to ensure an optimal quality of experience. Video is among the most critical content in this respect: it is consumed more and more by users and is also one of the most demanding in terms of the resources needed for its distribution (server capacity, bandwidth, ...). Adapting a video to the network state (adjusting its bitrate to the available bandwidth) or to the terminal capabilities (ensuring the codec is natively supported) is therefore indispensable, and is foreseen to take place in real time in networking devices such as home gateways. Video adaptation, however, is a resource-intensive process, which conflicts with its large-scale use in the low-cost devices that now form a large part of the Internet's backbone. This thesis focuses on the design of a low-cost, real-time video adaptation system intended for such future networks. After an analysis of the context, a generic adaptation system is proposed and evaluated against the state of the art, as a trade-off between system complexity and quality. The system is implemented on an FPGA to meet the real-time performance requirements and the need for a low-cost solution. Finally, the indirect effects of video adaptation are studied: reducing the video characteristics lowers energy consumption at the terminal side, thereby improving the experience for end users.
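To make the content- and context-awareness concrete, a toy decision routine that such a gateway might run is sketched below; the data structures and the passthrough/transrate/transcode split are assumptions for illustration, not the architecture implemented in the thesis.

```python
def adaptation_decision(content, terminal, network):
    """Decide which adaptation a home gateway should apply to one stream.

    content:  {"codec": str, "bitrate_kbps": int}
    terminal: {"codecs": set, "max_bitrate_kbps": int}
    network:  {"available_kbps": int}
    """
    budget = min(terminal["max_bitrate_kbps"], network["available_kbps"])
    if content["codec"] in terminal["codecs"] and content["bitrate_kbps"] <= budget:
        return ("passthrough", None)          # nothing to do, forward as-is
    if content["codec"] in terminal["codecs"]:
        return ("transrate", budget)          # same codec, lower the bitrate
    return ("transcode", budget)              # change codec and bitrate

print(adaptation_decision(
    {"codec": "h264", "bitrate_kbps": 4000},
    {"codecs": {"h264", "mpeg4"}, "max_bitrate_kbps": 2000},
    {"available_kbps": 3000},
))
```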
|
7 |
A Complexity-utility Framework For Optimizing Quality Of Experience For Visual Content In Mobile Devices. Onur, Ozgur Deniz. 01 February 2012.
Subjective video quality and video decoding complexity are jointly optimized in order to determine the video encoding parameters that will result in the best Quality of Experience (QoE) for an end user watching a video clip on a mobile device. Subjective video quality is estimated by an objective criterion, the video quality metric (VQM), and a method for predicting the video quality of a test sequence from available training sequences with similar content characteristics is presented; standardized spatial index and temporal index metrics are used to measure content similarity. A statistical approach for modeling decoding complexity on a hardware platform using content features extracted from video clips is presented. The overall decoding complexity is modeled as the sum of component complexities associated with the computation-intensive code blocks present in state-of-the-art hybrid video decoders. The content features and decoding complexities are treated as random parameters, and their joint probability density function is modeled with Gaussian Mixture Models (GMMs) obtained off-line from a large training set of video clips. Subsequently, the decoding complexity of a new video clip is estimated using the available GMM and the content features extracted in real time. A novel method to determine the video decoding capacity of mobile terminals, using a set of subjective decodability experiments performed once for each device, is also proposed. Finally, the estimated video quality of a content and the decoding capacity of a device are combined in a utility-complexity framework that optimizes the complexity-quality trade-off to determine the video coding parameters that result in the highest video quality without exceeding the hardware capabilities of a client device. The simulation results indicate that this approach is capable of predicting the user viewing satisfaction on a mobile device.
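As a rough illustration of the Gaussian-mixture step, the sketch below fits a joint GMM over synthetic feature/complexity pairs and predicts the complexity of a new clip by Gaussian mixture regression; the features, data and component count are placeholders, not the thesis's trained models.

```python
import numpy as np
from scipy.stats import multivariate_normal
from sklearn.mixture import GaussianMixture

# Train a joint GMM over [content features, decoding complexity] on synthetic
# placeholder data; the thesis trains on features extracted from real clips.
rng = np.random.default_rng(0)
features = rng.uniform(0.0, 1.0, size=(500, 2))          # e.g. spatial/temporal activity
complexity = 2.0 * features[:, 0] + features[:, 1] + rng.normal(0.0, 0.1, 500)
joint = np.column_stack([features, complexity])

gmm = GaussianMixture(n_components=3, covariance_type="full", random_state=0).fit(joint)

def predict_complexity(x, gmm, n_feat=2):
    """E[complexity | features] under the joint GMM (Gaussian mixture regression)."""
    x = np.asarray(x, dtype=float)
    weights, conditional_means = [], []
    for pi, mu, cov in zip(gmm.weights_, gmm.means_, gmm.covariances_):
        mu_f, mu_c = mu[:n_feat], mu[n_feat:]
        s_ff, s_cf = cov[:n_feat, :n_feat], cov[n_feat:, :n_feat]
        weights.append(pi * multivariate_normal.pdf(x, mean=mu_f, cov=s_ff))
        conditional_means.append(mu_c + s_cf @ np.linalg.solve(s_ff, x - mu_f))
    weights = np.asarray(weights) / np.sum(weights)
    return float(np.dot(weights, np.asarray(conditional_means).ravel()))

print(predict_complexity([0.7, 0.3], gmm))  # should land near 2*0.7 + 0.3 = 1.7
```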
|