1

Filstorleksoptimering för retuscheringsarbete : En undersökning med fokus på moderetuschering / File size optimization for retouching : A study with a focus on fashion retouching

Liljengård, Anton January 2017 (has links)
Large files are common when processing images today. Rapid technological development has driven an ever-growing demand for quality: as photographers' cameras have gained higher resolution, image file sizes have grown as well. The aim of this thesis has been to arrive at a recommendation for how to work towards a small file size, intended for retouchers who work in the fashion industry with images meant for print. The work has tried to identify which parts of the retouching workflow cause a larger file size, by contacting retouchers who regularly work with fashion images. The focus has been on layers in Photoshop and the editing options available to the retoucher. It emerged that the retouchers took similar measures to achieve a small file size, and that a certain similarity could be seen in how they worked with respect to what increased file size. It also emerged that file size is affected most by the contents of pixel layers and masks, as opposed to adjustment layers.
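The central finding above, that pixel layers and masks drive file size far more than adjustment layers, follows from simple arithmetic: a pixel layer stores channel data for every pixel on the canvas, whereas an adjustment layer stores only a small parameter set (plus an optional mask). Below is a rough back-of-envelope sketch in Python; the canvas dimensions, bit depths and the adjustment-layer size are illustrative assumptions, not figures from the thesis.

# Back-of-envelope estimate of uncompressed layer sizes in an image editor.
# All figures below are illustrative assumptions, not measurements from the thesis.

def pixel_layer_bytes(width, height, channels=4, bits_per_channel=16):
    """Raw size of a full pixel layer: one value per channel per pixel."""
    return width * height * channels * bits_per_channel // 8

def layer_mask_bytes(width, height, bits_per_channel=8):
    """A layer mask is a single-channel raster covering the whole canvas."""
    return width * height * bits_per_channel // 8

ADJUSTMENT_LAYER_BYTES = 4_000  # a curves/levels parameter set is tiny by comparison

if __name__ == "__main__":
    w, h = 6000, 4000  # a typical high-resolution fashion image (assumed)
    print(f"pixel layer:      {pixel_layer_bytes(w, h) / 1e6:8.1f} MB")
    print(f"layer mask:       {layer_mask_bytes(w, h) / 1e6:8.1f} MB")
    print(f"adjustment layer: {ADJUSTMENT_LAYER_BYTES / 1e6:8.3f} MB")

On these assumptions a single full-canvas pixel layer accounts for roughly 192 MB before compression, a mask for about 24 MB, and an adjustment layer for next to nothing, which is consistent with the pattern the retouchers reported.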
2

Data Reduction Methods for Deep Images

Wahlberg, David January 2017 (has links)
Deep images used in visual effects work during deep compositing tend to be very large. Quite often the files are larger than needed for their final purpose, which opens up an opportunity for optimization. This research project is about finding methods for identifying redundant and excessive data in deep images, and then approximating that data by resampling it and representing it with less data. The focus was on maintaining the final visual quality while optimizing the files, so that the methods can be used in a live production environment. While not very successful for geometric data, the results when optimizing volumetric data were very successful and exceeded expectations.
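The abstract does not spell out the resampling method, but the general idea of approximating redundant deep-image data with fewer samples can be sketched as follows: within each pixel, runs of consecutive depth samples with near-identical values are collapsed into one combined sample. This is a minimal illustrative sketch, not the method used in the thesis; the Sample structure, the tolerance and the 'over'-style combination rule are assumptions.

# Minimal sketch of reducing deep-image data: within a pixel, collapse runs of
# consecutive depth samples with near-identical values into a single sample,
# so less data approximates the same result.
# Illustrative only; not necessarily the method used in the thesis.

from dataclasses import dataclass

@dataclass
class Sample:
    depth: float   # camera-space depth of the sample
    alpha: float   # coverage/density contribution of the sample

def collapse(run):
    """Combine a run of similar samples into one: composite their alphas with
    the 'over' rule and place the result at the run's mean depth."""
    remaining = 1.0
    for s in run:
        remaining *= (1.0 - s.alpha)
    return Sample(sum(s.depth for s in run) / len(run), 1.0 - remaining)

def resample_pixel(samples, tolerance=0.01):
    """Group depth-sorted samples into runs whose alphas stay within
    `tolerance` of the run's first sample, then collapse each run."""
    if not samples:
        return []
    samples = sorted(samples, key=lambda s: s.depth)
    out, run = [], [samples[0]]
    for s in samples[1:]:
        if abs(s.alpha - run[0].alpha) < tolerance:
            run.append(s)
        else:
            out.append(collapse(run))
            run = [s]
    out.append(collapse(run))
    return out

if __name__ == "__main__":
    pixel = [Sample(d / 10.0, 0.05) for d in range(100)]  # a dense, uniform volume
    print(len(pixel), "->", len(resample_pixel(pixel)), "samples")

For a dense, uniform volumetric region like the one above, the hundred original samples collapse to a single sample while the pixel's combined coverage is preserved, which is the kind of gain the abstract reports for volumetric data.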
3

Hadoop Read Performance During Datanode Crashes / Hadoops läsprestanda vid datanodkrascher

Johannsen, Fabian, Hellsing, Mattias January 2016 (has links)
This bachelor thesis evaluates the impact of datanode crashes on the performance of read operations in the Hadoop Distributed File System, HDFS. The goal is to better understand how datanode crashes, as well as certain parameters, affect the performance of the read operation by looking at the execution time of the get command. The parameters studied are the number of crashed nodes, the block size and the file size. Data was collected by setting up a Linux test environment consisting of ten virtual machines with Hadoop installed and running tests on it. From this data the average execution time and standard deviation of the get command were calculated; the network activity during the tests was also measured. The results showed that neither the number of crashed nodes nor the block size had any significant effect on the execution time. They also showed that the execution time of the get command was not directly proportional to the size of the fetched file: fetching a file four times as large sometimes took up to 4.5 times as long. However, the consequences of a datanode crash appear to be much greater when fetching a small file than a large one: the average execution time increased by up to 36% when a large file was fetched, but by as much as 85% when fetching a small file.
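The measurement described above is straightforward to reproduce in outline: time repeated invocations of the HDFS get command for a given file and compute the mean and standard deviation of the execution time. The sketch below shows one such timing harness; the HDFS path, local path and repetition count are hypothetical, and the thesis's actual setup (ten virtual machines, simulated datanode crashes, network monitoring) is not reproduced here.

# Minimal sketch of the kind of measurement described above: time repeated
# 'hdfs dfs -get' invocations for a file and report mean and standard deviation.
# Paths and repetition count are assumptions, not the thesis's setup.

import os
import statistics
import subprocess
import time

def time_get(hdfs_path, local_path="/tmp/hdfs_get_test", runs=10):
    """Run 'hdfs dfs -get' repeatedly and return the per-run execution times."""
    times = []
    for _ in range(runs):
        if os.path.exists(local_path):
            os.remove(local_path)  # -get refuses to overwrite an existing file
        start = time.monotonic()
        subprocess.run(["hdfs", "dfs", "-get", hdfs_path, local_path], check=True)
        times.append(time.monotonic() - start)
    return times

if __name__ == "__main__":
    samples = time_get("/benchmark/testfile_1GB")  # hypothetical test file
    print(f"mean   = {statistics.mean(samples):.2f} s")
    print(f"stddev = {statistics.stdev(samples):.2f} s")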
