611
Evaluation of Systematic & Colour Print Mottle. Christoffersson, Jessica, January 2005.
Print mottle is a problem that has long troubled the printing industry. Along with sharpness and correct colour reproduction, absence of print mottle is one of the most important factors of print quality. The ability to measure the amount of print mottle (reflectance variation) can in many ways facilitate the development of printing methods. Such a measurement model should preferably follow the functions and abilities of the Human Visual System (HVS).

The traditional model that STFI-Packforsk has developed to measure print mottle uses frequency analysis to find variations in reflectance. However, this model has limitations: it does not agree perfectly with the functions of the HVS and it only measures variations in lightness. A new model that better follows the functions of the HVS has therefore been developed. The new model considers not only variations in lightness (monochromatic) but also variations in colour (chromatic). It also puts a higher weight on systematic variations than on random variations, since the human eye is more sensitive to ordered structures. Furthermore, the new model uses a contrast sensitivity function that weights the importance of variations at different frequencies.

To compare the new model with the traditional STFI model, two tests were carried out. Each test consisted of a group of test patches that were evaluated by both the traditional STFI model and the new model. The first test consisted of 15 greyscale test patches originating from conventional flexo and offset presses. The second test consisted of 24 digitally simulated test patches containing colour mottle and systematic mottle.

The evaluation results of both the traditional and the new model were compared to the results of a visual evaluation carried out by a panel of test subjects. The new model produced results that correlated considerably better with the visual estimation than the traditional model did.
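The core of such a measure can be illustrated with a short sketch: transform the reflectance image to the frequency domain, weight the variation spectrum with a contrast sensitivity function, and integrate the result into a single mottle index. This is a minimal illustration of the general principle only, not STFI-Packforsk's model; the CSF form (a Mannos-Sakrison-style curve), the print resolution and the viewing distance are assumptions, and the extra weighting of systematic versus random variations described above is left out.

```python
# Minimal sketch of a CSF-weighted mottle index for a greyscale reflectance
# image normalised to [0, 1]. CSF shape, dpi and viewing distance are
# illustrative assumptions, not the STFI-Packforsk model.
import numpy as np

def csf(f_cpd):
    """Approximate contrast sensitivity (Mannos-Sakrison form), f in cycles/degree."""
    return 2.6 * (0.0192 + 0.114 * f_cpd) * np.exp(-(0.114 * f_cpd) ** 1.1)

def mottle_index(reflectance, dpi=300, viewing_distance_cm=30.0):
    """Weight the reflectance variation spectrum by the CSF and integrate."""
    img = reflectance - reflectance.mean()           # remove the DC component
    spectrum = np.abs(np.fft.fft2(img)) ** 2         # power spectrum of the variation
    fy = np.fft.fftfreq(img.shape[0], d=25.4 / dpi)  # cycles per mm on paper
    fx = np.fft.fftfreq(img.shape[1], d=25.4 / dpi)
    f_mm = np.hypot(*np.meshgrid(fy, fx, indexing="ij"))
    # convert cycles/mm on paper to cycles/degree at the eye
    mm_per_degree = 2 * viewing_distance_cm * 10 * np.tan(np.radians(0.5))
    f_cpd = f_mm * mm_per_degree
    weighted = spectrum * csf(f_cpd) ** 2
    return np.sqrt(weighted.sum()) / img.size        # RMS of perceptually weighted variation

# Example: compare a patch with a periodic (systematic) disturbance to one
# with random noise of similar amplitude.
rng = np.random.default_rng(0)
x = np.tile(np.arange(256), (256, 1))
systematic = 0.5 + 0.02 * np.sin(2 * np.pi * x / 16)
random_patch = 0.5 + 0.02 * rng.standard_normal((256, 256))
print(mottle_index(systematic), mottle_index(random_patch))
```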
612
INFLOW : Structured Print Job Delivery / INFLOW : strukturerade jobbleverans. Buckwalter, Claes, January 2003.
More and more print jobs are delivered from customer to printer digitally over the Internet. Although Internet-based job delivery can be highly efficient, companies in the graphic arts and printing industry often suffer unnecessary costs related to this type of inflow of print jobs to their production workflows. One of the reasons for this is the lack of a well-defined infrastructure for delivering print jobs digitally over the Internet.

This thesis presents INFLOW, a prototype print job delivery system for the graphic arts and printing industry. INFLOW is a web-based job delivery system hosted on an Internet-connected server by the organization receiving the print jobs. The focus has been on creating a system that is easy to use, highly customizable, secure, and easy to integrate with existing and future systems from third-party vendors. INFLOW has been implemented using open standards such as XML and JDF (Job Definition Format).

The requirements for ease of use, high customizability and security are met by choosing a web-based architecture. The client side is implemented using standard web technologies such as HTML, CSS and JavaScript, while the server side is based on J2EE, Java Servlets and JavaServer Pages (JSP). Using a web browser as a job delivery client provides a highly customizable user interface and built-in support for encrypted file transfers using HTTPS (HTTP over SSL).

Process automation and easy integration with other print production systems are facilitated with CIP4's JDF (Job Definition Format). INFLOW also supports "hot folder workflows" for integration with older preflight software and other hot-folder-based software common in prepress workflows.
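As a concrete picture of the hot-folder integration mentioned above, the sketch below polls a drop directory and hands newly arrived job files to the next workflow step. It is a hypothetical illustration, not part of INFLOW (which is implemented in Java on J2EE); the directory names, file pattern and handler are invented.

```python
# Hypothetical hot-folder watcher illustrating the kind of integration INFLOW
# supports; paths and the handler are made-up examples.
import shutil
import time
from pathlib import Path

HOT_FOLDER = Path("incoming_jobs")   # assumed drop directory
PROCESSED = Path("processed_jobs")   # assumed hand-off directory

def process_job(job_file: Path) -> None:
    """Hand a delivered print job over to the next step in the workflow."""
    print(f"New job received: {job_file.name}")
    shutil.move(str(job_file), PROCESSED / job_file.name)

def watch(poll_seconds: float = 5.0) -> None:
    HOT_FOLDER.mkdir(exist_ok=True)
    PROCESSED.mkdir(exist_ok=True)
    while True:
        # A production hot folder would also check that each file has stopped
        # growing before handing it off; omitted here for brevity.
        for job_file in HOT_FOLDER.glob("*.pdf"):
            process_job(job_file)
        time.sleep(poll_seconds)

if __name__ == "__main__":
    watch()
```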
613
Extraction and Application of Secondary Crease Information in Fingerprint Recognition Systems. Hymér, Pontus, January 2005.
This thesis states that cracks and scars, referred to as secondary creases, in fingerprint images can be used to aid and complement fingerprint recognition, especially in cases where there is not enough clear data to use traditional methods such as minutiae-based or correlation techniques. A Gabor filter bank is used to extract areas with linear patterns, after which the Hough transform is used to identify secondary creases in (r, theta) space. The methods proposed for secondary crease extraction work well and provide information about which areas in an image contain usable linear patterns. The comparison methods are, however, not as robust, and yield a False Rejection Rate of 30% and a False Acceptance Rate of 20% on the proposed dataset, which consists of poor-quality fingerprints. In short, the methods still make it possible to use fingerprint images previously considered unusable in fingerprint recognition systems.
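A rough sketch of the two-stage extraction idea is shown below: a Gabor filter bank highlights areas with linear texture, and a Hough transform turns the thresholded response into crease candidates in (r, theta) space. The filter frequencies, orientations, threshold and peak count are illustrative assumptions, not the parameters used in the thesis, and the input file name is hypothetical.

```python
# Sketch of Gabor-bank area selection followed by Hough-transform crease
# extraction; all parameters are illustrative assumptions.
import numpy as np
from skimage import io
from skimage.filters import gabor
from skimage.transform import hough_line, hough_line_peaks

def crease_candidates(image, frequencies=(0.1, 0.2), n_orientations=8):
    # 1. Gabor filter bank: keep the strongest response over all orientations.
    response = np.zeros_like(image, dtype=float)
    for f in frequencies:
        for theta in np.linspace(0, np.pi, n_orientations, endpoint=False):
            real, _ = gabor(image, frequency=f, theta=theta)
            response = np.maximum(response, np.abs(real))

    # 2. Threshold to a binary map of areas with usable linear pattern.
    linear_mask = response > response.mean() + 2 * response.std()

    # 3. Hough transform: each peak is a crease candidate (r, theta).
    h, angles, dists = hough_line(linear_mask)
    _, peak_angles, peak_dists = hough_line_peaks(h, angles, dists, num_peaks=10)
    return list(zip(peak_dists, peak_angles))

fingerprint = io.imread("fingerprint.png", as_gray=True)  # hypothetical input file
for r, theta in crease_candidates(fingerprint):
    print(f"crease candidate: r = {r:.1f}, theta = {np.degrees(theta):.1f} deg")
```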
614
Software Techniques for Distributed Shared Memory. Radovic, Zoran, January 2005.
In large multiprocessors, the access to shared memory is often nonuniform and may vary by as much as a factor of ten for some distributed shared-memory architectures (DSMs). This dissertation identifies another important nonuniform property of DSM systems: nonuniform communication architecture, NUCA. High-end hardware-coherent machines built from large nodes, or from chip multiprocessors, are typical NUCA systems, since they have a lower penalty for reading recently written data from a neighbor's cache than from a remote cache. This dissertation identifies node affinity as an important property for scalable general-purpose locks. Several software-based hierarchical lock implementations exploiting NUCAs are presented and evaluated. NUCA-aware locks are shown to be almost twice as efficient for contended critical sections compared to traditional lock implementations.

The shared-memory "illusion" provided by some large DSM systems may be implemented using either hardware, software or a combination thereof. A software-based implementation can enable cheap cluster hardware to be used, but typically suffers from poor and unpredictable performance characteristics.

This dissertation advocates a new software-hardware trade-off design point based on a new combination of techniques. The two low-level techniques, fine-grain deterministic coherence and synchronous protocol execution, as well as profile-guided protocol flexibility, are evaluated in isolation as well as in a combined setting using all-software implementations. Finally, a minimum of hardware trap support is suggested to further improve the performance of coherence protocols across cluster nodes. It is shown that all these techniques combined could result in fairly stable performance on par with hardware-based coherence.
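The node-affinity idea behind the hierarchical locks can be illustrated with a toy two-level lock: threads first compete with other threads on their own node, and only the node-local winner contends for the global lock, which reduces inter-node lock traffic. The Python sketch below shows only this two-level structure; the dissertation's locks target hardware DSM systems and additionally use backoff to keep a contended lock within a node, which this sketch omits.

```python
# Toy two-level lock illustrating the node-affinity idea; not the
# dissertation's actual NUCA-aware lock implementations.
import threading

class HierarchicalLock:
    def __init__(self, n_nodes: int):
        self.global_lock = threading.Lock()
        self.node_locks = [threading.Lock() for _ in range(n_nodes)]

    def acquire(self, node_id: int) -> None:
        # Contend locally first: only one thread per node ever waits on the
        # global lock, so inter-node lock traffic is reduced.
        self.node_locks[node_id].acquire()
        self.global_lock.acquire()

    def release(self, node_id: int) -> None:
        self.global_lock.release()
        self.node_locks[node_id].release()

# Usage: threads pass their (assumed) node id when entering a critical section.
lock = HierarchicalLock(n_nodes=2)
counter = 0

def worker(node_id: int) -> None:
    global counter
    for _ in range(1000):
        lock.acquire(node_id)
        counter += 1          # critical section touching shared data
        lock.release(node_id)

threads = [threading.Thread(target=worker, args=(i % 2,)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # expected: 4000
```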
615
Quality Measures of Halftoned Images (A Review). Axelson, Per-Erik, January 2003.
This study is a thesis for the Master of Science degree in Media Technology and Engineering at the Department of Science and Technology, Linköping University. It was carried out from November 2002 to May 2003.

Objective image quality measures play an important role in various image processing applications. This paper focuses on quality measures applied to halftoned images. Digital halftoning is the process of generating a pattern of binary pixels that creates the illusion of a continuous-tone image. Algorithms built on this technique produce results of very different quality and characteristics. To evaluate and improve their performance, it is important to have robust and reliable image quality measures. This literature survey gives a general description of digital halftoning and halftone image quality methods.
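One common class of measures covered by such reviews can be sketched briefly: filter both the continuous-tone original and the binary halftone with a low-pass filter approximating the human visual system, then compare the filtered images. The Gaussian width, the use of PSNR and the Floyd-Steinberg halftoner below are illustrative choices for the sketch, not methods singled out by the review.

```python
# Sketch of an HVS-weighted halftone quality measure; the filter width and
# the PSNR criterion are illustrative assumptions.
import numpy as np
from scipy.ndimage import gaussian_filter

def hvs_weighted_psnr(original, halftone, sigma=1.5):
    """PSNR between eye-filtered versions of the original and the halftone."""
    o = gaussian_filter(original.astype(float), sigma)
    h = gaussian_filter(halftone.astype(float), sigma)
    mse = np.mean((o - h) ** 2)
    return 10 * np.log10(1.0 / mse) if mse > 0 else float("inf")

def floyd_steinberg(image):
    """Simple error-diffusion halftoning, used here only to have something to measure."""
    img = image.astype(float).copy()
    h, w = img.shape
    out = np.zeros_like(img)
    for y in range(h):
        for x in range(w):
            out[y, x] = 1.0 if img[y, x] >= 0.5 else 0.0
            err = img[y, x] - out[y, x]
            if x + 1 < w:
                img[y, x + 1] += err * 7 / 16
            if y + 1 < h and x > 0:
                img[y + 1, x - 1] += err * 3 / 16
            if y + 1 < h:
                img[y + 1, x] += err * 5 / 16
            if y + 1 < h and x + 1 < w:
                img[y + 1, x + 1] += err * 1 / 16
    return out

gradient = np.tile(np.linspace(0, 1, 128), (128, 1))   # synthetic test image
print(hvs_weighted_psnr(gradient, floyd_steinberg(gradient)))
```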
617
Matting of Natural Image Sequences using Bayesian Statistics. Karlsson, Fredrik, January 2004.
The problem of separating a non-rectangular foreground image from a background image is a classical problem in image processing and analysis, known as matting or keying. A common example is a film frame where an actor is extracted from the background to later be placed on a different background. Compositing these objects against a new background is one of the most common operations in the creation of visual effects. When the original background is of non-constant color, the matting becomes an underdetermined problem, for which a unique solution cannot be found. This thesis describes a framework for computing mattes from images with backgrounds of non-constant color, using Bayesian statistics. Foreground and background color distributions are modeled as oriented Gaussians, and optimal color and opacity values are determined using a maximum a posteriori approach. Together with information from optical flow algorithms, the framework produces mattes for image sequences without needing user input for each frame. The approach used in this thesis differs from previous research in a few areas. The optimal order of processing is determined in a different way, and sampling of color values is changed to work more efficiently on high-resolution images. Finally, a gradient-guided local smoothness constraint can optionally be used to improve results in cases where the normal technique produces poor results.
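The MAP estimation step can be condensed into a single-pixel sketch in the spirit of Bayesian matting: alternate between solving a linear system for the foreground and background colors F and B given the opacity, and a closed-form update of the opacity given F and B. The Gaussian means and covariances below are made-up stand-ins for the oriented Gaussians fitted to local color samples, and the sketch omits the processing-order, sampling and smoothness-constraint contributions of the thesis.

```python
# Single-pixel MAP sketch for Bayesian matting; the distributions and the
# observed color are invented illustration data.
import numpy as np

def map_estimate(C, F_mean, F_cov, B_mean, B_cov, sigma_C=0.01, n_iter=10):
    """Return (F, B, alpha) maximising the posterior for one observed color C."""
    I = np.eye(3)
    inv_F, inv_B = np.linalg.inv(F_cov), np.linalg.inv(B_cov)
    alpha = 0.5                                  # initial guess
    for _ in range(n_iter):
        # Solve the 6x6 linear system for F and B given the current alpha.
        a11 = inv_F + I * alpha**2 / sigma_C**2
        a12 = I * alpha * (1 - alpha) / sigma_C**2
        a22 = inv_B + I * (1 - alpha)**2 / sigma_C**2
        A = np.block([[a11, a12], [a12, a22]])
        b = np.concatenate([inv_F @ F_mean + C * alpha / sigma_C**2,
                            inv_B @ B_mean + C * (1 - alpha) / sigma_C**2])
        F, B = np.split(np.linalg.solve(A, b), 2)
        # Closed-form alpha given F and B: project C onto the F-B line.
        alpha = np.clip((C - B) @ (F - B) / ((F - B) @ (F - B) + 1e-12), 0.0, 1.0)
    return F, B, alpha

# Example with made-up distributions and an observed color between them.
C = np.array([0.45, 0.35, 0.20])
F, B, alpha = map_estimate(C,
                           F_mean=np.array([0.8, 0.2, 0.1]), F_cov=np.eye(3) * 0.01,
                           B_mean=np.array([0.1, 0.6, 0.3]), B_cov=np.eye(3) * 0.01)
print(f"alpha = {alpha:.2f}")
```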
618
Colour proof quality verification. Sundell, Johanna, January 2004.
BACKGROUND: When a customer delivers a colour proof to a printer, they expect the final print to look similar to that proof. Today it is impossible to check whether a match between proof and print is technically possible to reach at all. This is mainly because no information regarding the production circumstances of the proof is provided; for instance, the printer does not know which proofer, RIP or ICC profile was used. Situations where similarity between proof and print cannot be reached and the press has to be stopped are both costly and time-consuming, and should therefore be avoided.

PURPOSE: The purpose of this thesis was to investigate the possibility of forming a method able to check whether a proof is of such good quality that it is likely to produce a print similar to it.

METHOD: The basic assumption was that the quality of a proof can be determined by spectrally measuring known colour patches and comparing those values to reference values representing the same patches printed under optimal press conditions. To decide which and how many patches are required, literature and reports were studied, after which a test printing and a comparison between proofing systems were performed. To analyse the measurement data effectively, a tool that analyses the difference between reference and measurement data was developed using MATLAB.

RESULT: The result is a suggested colour proof quality verification method consisting of two parts that are intended to complement each other. The first part, called Colour proofing system evaluation, evaluates entire proofing systems. It consists of a test page containing colour patches, grey balance fields, gradations and photographs. The second part, called Colour proof control, consists of a smaller set of colour patches that is attached to each proof.

CONCLUSIONS: The method is not complete, since more research regarding the difference between measurement results and visual impression is needed. To obtain realistic tolerance levels for differences between measurement and reference data, the method must be tested in everyday production. If this is done, the method is expected to provide a good way of controlling the quality of colour proofs.
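The measurement comparison underlying the Colour proof control part can be sketched as follows: obtain Lab values for the control patches on the proof and compare them to reference values with a colour-difference formula. CIE76 Delta E is used here for brevity, and the reference values, measured values and tolerance are invented examples rather than data from the thesis.

```python
# Sketch of a patch-by-patch Delta E comparison between a measured proof and
# reference values; all numbers are invented examples.
import numpy as np

def delta_e_cie76(lab_measured, lab_reference):
    """Euclidean distance in CIELAB (Delta E*ab, CIE 1976)."""
    return np.linalg.norm(np.asarray(lab_measured) - np.asarray(lab_reference), axis=-1)

# Hypothetical reference Lab values for a few control patches printed under
# optimal press conditions, and measured values from a delivered proof.
reference = np.array([[53.2,  79.9, 62.0],
                      [48.0, -68.0, 25.0],
                      [88.0,  -1.0, 95.0]])
measured  = np.array([[54.0,  77.5, 60.1],
                      [47.1, -66.2, 27.4],
                      [86.5,   0.3, 93.8]])

tolerance = 4.0                                   # assumed acceptance limit
for i, de in enumerate(delta_e_cie76(measured, reference)):
    verdict = "OK" if de <= tolerance else "out of tolerance"
    print(f"patch {i + 1}: dE = {de:.1f} ({verdict})")
```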
619
Feasibility study: Implementation of a gigabit Ethernet controller using an FPGA. Fält, Richard, January 2003.
Background: Many of the systems that Enea Epact AB develops for its customers communicate with computers. In order to meet the customers' demands for cost-effective solutions, Enea Epact wants to know whether it is possible to implement a gigabit Ethernet controller in an FPGA. The controller shall be designed with the intent to meet the requirements of IEEE 802.3.

Aim: Find out whether it is feasible to implement a gigabit Ethernet controller using an FPGA. Here, feasible means that certain constraints on size, speed and device must be met.

Method: Gain insight into the IEEE 802.3 standard and make a rough design of a gigabit Ethernet controller in order to identify parts of the standard that might cause problems when implemented in an FPGA. Implement the selected parts and evaluate the results.

Conclusion: It is possible to implement a gigabit Ethernet controller using an FPGA, and the FPGA does not have to be a state-of-the-art device.
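One 802.3 detail that any controller implementation has to get right is the frame check sequence, a CRC-32 over the frame contents. A small software reference like the one below is often useful for checking test vectors against an HDL implementation; it is not part of the thesis's design, and the example frame bytes are made up.

```python
# Software reference for the IEEE 802.3 frame check sequence (CRC-32),
# useful for cross-checking an HDL implementation; the frame is a made-up example.
import zlib

def ethernet_fcs(frame_without_fcs: bytes) -> bytes:
    """CRC-32 of the frame, appended least-significant byte first per 802.3."""
    return zlib.crc32(frame_without_fcs).to_bytes(4, "little")

# Hypothetical minimal frame: destination, source, EtherType, 46-byte payload.
frame = (bytes.fromhex("ffffffffffff") +   # destination MAC (broadcast)
         bytes.fromhex("020000000001") +   # source MAC (locally administered)
         bytes.fromhex("0800") +           # EtherType: IPv4
         bytes(46))                        # zero payload padded to minimum size
print(ethernet_fcs(frame).hex())
```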
620
Mixed RTL and gate-level power estimation with low power design iteration / Lågeffektsestimering på kombinerad RTL- och grind-nivå med lågeffekts design iteration. Nilsson, Jesper, January 2003.
In the last three decades we have witnessed a remarkable development in the area of integrated circuits: from small logic devices containing a few hundred transistors to modern processors containing several tens of millions of transistors. However, power consumption has become a real problem and may very well be the limiting factor of future development. Designing for low power is therefore increasingly important, and to accomplish an efficient low-power design, accurate power estimation at an early design stage is essential.

The aim of this thesis was to set up a power estimation flow that estimates power consumption at an early design stage. The developed flow spans both the RTL and gate levels, incorporating Mentor Graphics ModelSim (RTL-level simulator), Cadence PKS (gate-level synthesizer) and custom-developed power estimation tools. The power consumption is calculated from gate-level physical information and RTL-level toggle information. To achieve high estimation accuracy, real node annotations are used together with a custom-developed on-chip wire model that estimates node voltage swing. Since power estimation can be very time-consuming, the flow also includes support for low-power design iteration, which gives an efficient power estimation speedup when concentrating on smaller sub-parts of the design.
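The kind of calculation such a flow performs can be illustrated with a toy example: combine per-node capacitances from the gate-level netlist and wire model with toggle rates from RTL simulation, using the textbook approximation P = 0.5 * C * Vdd * Vswing * f * toggles per cycle. The node data, supply voltage and clock frequency below are invented, and the formula is a simplification rather than the thesis's exact model.

```python
# Toy dynamic power estimate from gate-level capacitance and RTL toggle counts;
# all node data and operating conditions are invented examples.
VDD = 1.8          # supply voltage [V], assumed
F_CLK = 100e6      # clock frequency [Hz], assumed

# node name -> (capacitance [F] from netlist + wire model,
#               toggles per clock cycle from RTL simulation,
#               estimated voltage swing [V] from the wire model)
nodes = {
    "alu_out[0]":  (25e-15, 0.40, 1.8),
    "alu_out[1]":  (22e-15, 0.35, 1.8),
    "long_bus[7]": (180e-15, 0.10, 1.2),   # long wire with reduced swing
}

def dynamic_power(cap, toggle_rate, swing, vdd=VDD, f_clk=F_CLK):
    """Average dynamic power of one node."""
    return 0.5 * cap * vdd * swing * toggle_rate * f_clk

total = 0.0
for name, (cap, toggles, swing) in nodes.items():
    p = dynamic_power(cap, toggles, swing)
    total += p
    print(f"{name:12s}: {p * 1e6:.2f} uW")
print(f"total       : {total * 1e6:.2f} uW")
```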