51.
Analysis of High Fidelity Turbomachinery CFD Using Proper Orthogonal Decomposition. Spencer, Ronald Alex, 01 March 2016
Assessing the impact of inlet flow distortion in turbomachinery is desired early in the design cycle. This thesis introduces and validates the use of methods based on the Proper Orthogonal Decomposition (POD) to analyze clean and 1/rev static pressure distortion simulation results at design and near-stall operating conditions. The value of POD comes from its ability to efficiently extract both quantitative and qualitative information about dominant spatial flow structures as well as information about temporal fluctuations in flow properties. Observation of the modes allowed qualitative identification of shock waves as well as quantification of their location and range of motion. Modal coefficients revealed the location of the passage shock at a given angular location. Distortion amplification and attenuation between rotors was also identified, and a relationship was identified between downstream conditions and how distortion manifests itself. POD provides an efficient means for extracting the most meaningful information from large CFD simulation datasets. Static pressure and axial velocity were analyzed to explore the flow physics of three rotors of a compressor with a distorted inlet. Based on the analysis of static pressure using the POD modes, it was concluded that there was a decreased range of motion in passage shock oscillation. Analysis of axial velocity POD modes revealed the presence of a separated region on the low-pressure surface of the blade, which was most dynamic in rotor 1. The thickness of this structure decreased at the near-stall operating condition. The general conclusion is that as the fan approaches stall the apparent effects of distortion are lessened, which leads to less variation in the operating condition. This is because the change in operating condition places the fan at a different position on the speedline, where distortion effects are less pronounced. POD modes of entropy flux were used to identify three distinct levels of entropy flux in the blade row passage. The separated region had the highest entropy due to the irreversibilities associated with separation.
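As a rough illustration of the technique this abstract relies on, the POD modes, singular values, and modal (temporal) coefficients of a flow field can be obtained from the singular value decomposition of a mean-subtracted snapshot matrix. The Python sketch below is a generic, minimal example written under that assumption; the array sizes and names are hypothetical, and it is not the solver, dataset, or code used in the thesis.

```python
import numpy as np

def pod(snapshots):
    """Proper Orthogonal Decomposition of a snapshot matrix.

    snapshots: (n_points, n_snapshots) array, each column one time instant.
    Returns spatial modes, singular values, and temporal (modal) coefficients.
    """
    mean = snapshots.mean(axis=1, keepdims=True)
    fluct = snapshots - mean                       # analyse fluctuations about the mean field
    modes, sigma, vt = np.linalg.svd(fluct, full_matrices=False)
    coeffs = np.diag(sigma) @ vt                   # modal coefficients over time
    return modes, sigma, coeffs

# toy example: 1000 spatial points, 50 snapshots of synthetic pressure data
p = np.random.rand(1000, 50)
modes, sigma, coeffs = pod(p)
energy = sigma**2 / np.sum(sigma**2)               # relative energy captured by each mode
print("energy in first 3 modes:", energy[:3].sum())
```

Ranking modes by their relative energy (the squared singular values) is what lets dominant structures, such as a passage shock, be identified and tracked through their modal coefficients.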
52.
Fiber post-processing and atomic spectroscopy for the development of atomic-vapour photonic microcell. Zheng, Ximeng, 18 July 2017
This thesis reports on atomic spectroscopy for the development of alkaline atomic-vapour photonic microcells (PMCs). The work is motivated by reproducing the outstanding laboratory-based performances achieved in the fields of frequency standards and coherent optics in highly compact, stand-alone devices accessible to a wider scientific community or to a commercial market. In our case these future devices will be based on a hollow-core photonic crystal fiber (HC-PCF) filled with a gas-phase material to form a PMC, which stands out for its ultra-long interaction length combined with a micrometric modal area. However, the micrometric scale of the fiber core harbouring the atoms raises several technical and scientific challenges. Among the technical challenges, we list the development of an efficient process for loading atoms inside a long hollow fiber with a small core diameter, and the suppression or mitigation of the physico-chemical reactivity of the atoms (i.e. rubidium) with the silica inner surface of the fiber core. In parallel, the large surface-to-volume ratio of the fiber core raises questions such as the coherence relaxation dynamics and the nature and effect of the atom-surface interaction. The thesis reports on coating the fiber-core inner surface with different materials to mitigate the physico-chemical reactions of the confined atoms with the surface, on tapering large-core inhibited-coupling Kagome HC-PCF, and on a splicing technique that ensures low splice loss and no atomic reactivity during the splicing process. In parallel, the thesis reports on a set of spectroscopy experiments to assess the relaxation dynamics of atoms inside HC-PCF and on the observation of novel sub-Doppler transparencies.
53.
Application of high pressure processing for extending the shelf-life of fresh lactic curd cheese. Daryaei, Hossein (s3088498@student.rmit.edu.au), January 2008
Outgrowth of spoilage yeasts and moulds and post-processing acidification can limit the shelf-life of some fermented dairy products, including fresh lactic curd cheeses. The possibility of using high pressure processing (HPP) for controlling these problems was investigated in a commercially manufactured fresh lactic curd cheese (pH 4.3-4.4) and in fermented milk models (pH 4.3-6.5). The effects of HPP at 300 and 600 MPa on inactivation of glycolytic enzymes of lactic acid bacteria were also evaluated. Fresh cheeses made from pasteurised bovine milk using a commercial Lactococcus starter preparation were treated with high pressures ranging from 200 to 600 MPa (≤ 22°C, 5 min) under vacuum packaging conditions and subsequently stored at 4°C for 8 weeks. Treatment at ≥ 300 MPa substantially reduced the viable count of Lactococcus and effectively prevented the outgrowth of yeasts and moulds for 6 to 8 weeks without adversely affecting the sensory and textural attributes of the product. However, it had no significant effect (p < 0.01) on the variation of titratable acidity during storage. Fermented milk models were prepared by individually growing Lactococcus lactis subsp. lactis C10, Lactococcus lactis subsp. cremoris BK5, Streptococcus thermophilus TS1, Lactobacillus acidophilus 2400 and Lactobacillus delbrueckii subsp. bulgaricus 2517 in UHT skim milk and diluting the resulting fermented milk with UHT skim milk up to pH 6.5. Pressure treatment of the milk models at pH 5.2 resulted in substantial inhibition of post-processing acidification during storage and markedly reduced the viable count of Lactococcus at both 300 and 600 MPa, and of the other bacteria only at 600 MPa. Treatment of the milk model at 600 MPa decreased the viable counts of Candida zeylanoides and Candida lipolytica (wild-type spoilage yeasts of lactic curd cheese, added as challenge cultures) from 10⁵ CFU mL⁻¹ to below the detection limit (log 0 CFU mL⁻¹) at all pH levels tested (pH 4.3-6.5) and effectively controlled their outgrowth for 8 weeks. Treatment of the milk model at 300 MPa had a similar effect only on C. zeylanoides. The viable count of C. lipolytica was reduced by 2.6, 2.4 and 2.3 logs by treatment at 300 MPa at pH levels of 4.3, 5.2 and 6.5, respectively, but subsequently recovered by 2.9, 2.8 and 3.2 logs within 3 weeks. Glycolytic enzymes of the various starter bacteria showed different responses to pressure treatment. The lactate dehydrogenase in L. lactis subsp. lactis and Lb. acidophilus was quite resistant to pressures up to 600 MPa, but it was almost completely inactivated in S. thermophilus at pressure levels as low as 300 MPa. The β-galactosidase in Lb. acidophilus was more pressure stable than the β-galactosidase in S. thermophilus and the phospho-β-galactosidase in L. lactis subsp. lactis. The findings of this study suggest HPP at 300-600 MPa as an effective method for controlling the outgrowth of some spoilage yeasts and moulds in fresh lactic curd cheeses. The results obtained with selected lactic acid bacteria in fermented milk models can be used to assist in establishing HPP operating parameters for the development of new-generation cultured dairy products of reduced acidity and extended shelf-life.
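As a small aside on the arithmetic behind the counts reported above, log (decimal) reductions compare viable counts before and after treatment on a log10 scale. The snippet below is a hedged illustration only; the helper name is hypothetical, and the numbers simply re-use figures quoted in the abstract.

```python
import math

def log_reduction(before_cfu, after_cfu):
    """Decimal (log10) reduction of a viable count."""
    return math.log10(before_cfu) - math.log10(after_cfu)

# e.g. a 2.6-log reduction of C. lipolytica at 300 MPa and pH 4.3
before = 1e5                 # CFU/mL before treatment
after = before / 10**2.6     # CFU/mL after treatment
print(round(log_reduction(before, after), 1))   # -> 2.6
```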
54.
Novel Laser Based NiTi Shape Memory Alloy Processing Protocol for Medical Device Applications. Pequegnat, Andrew, 31 March 2014
The unique performance offerings of NiTi-based shape memory alloys (SMAs), which include the shape memory effect (SME), pseudoelasticity (PE) and biocompatibility, have led to widespread acceptance of these alloys as valuable engineering materials. Over the past several decades the complex metallurgy behind the SME and PE properties has for the most part been uncovered, and the design and engineering know-how has been demonstrated, facilitating successful application of NiTi devices in numerous industries. More mature applications in the medical industry in particular, including medical devices such as catheters, guide wires, orthodontic arch wires, maxillofacial reconstruction implants, minimally invasive surgical tools, and arterial and gastrointestinal stents, have become common practice in modern medicine. Recently, however, there has been a drive for more demanding functionality of SMAs, for example locally modifying properties to create tuneable or gradient SME and PE performance. Unique processing protocols are therefore necessary to meet these demands and allow SMAs to reach their full potential in a wider range of applications. The current thesis details the application of pulsed Nd:YAG laser processing along with post-processing techniques to locally tune both the SME and PE functional properties of monolithic binary NiTi wires and strip, while maintaining confidence in the retained corrosion performance and limited release of biologically harmful Ni ions. This extensive study contains three distinct parts: i) application of a laser-induced vaporization protocol to locally embed multiple memories in a monolithic wire actuator; ii) uncovering the process, structure and performance relationship of combined laser, cold working and heat treatment processes; and iii) comprehensive characterization of surface characteristics and their relationship with corrosion performance and Ni ion release from laser-processed material.
55.
Methodology of structural analysis and post-processing from offshore system simulations. Henrique Murilo Gaspar, 28 June 2007
This work presents a methodology developed to treat the hydrodynamic analysis of an offshore system conjointly with its structural analysis; the same methodology also allows for combined post-processing of the results. Programming routines were created to enable the use of the time series of the forces acting on the risers and mooring lines of a platform as input data for a finite element analysis solver. By applying these forces to the finite element model and running the subsequent analysis in the time domain, an interface to the solver output was created so that the structural results could be imported into the hydrodynamic post-processor and visualised with the same motions obtained in the hydrodynamic analysis response. TPNView, the post-processor developed at the Tanque de Provas Numérico (TPN) laboratory, received the routines and interfaces created from the ideas presented in this dissertation. With this it is possible to view, in a single visualisation tool, both the hydrodynamic and the structural behaviour of a component of the system at once.
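To make the data flow concrete, the sketch below shows one possible way to turn a time series of line forces into a tabular, time-dependent nodal-load file that a finite-element pre-processor could ingest. It is a minimal illustration under assumed file layouts; the column names, file names, and node mapping are hypothetical and do not reflect the actual TPN tool chain or solver formats.

```python
import csv

def forces_to_fe_loads(series_csv, out_path, node_map):
    """Convert a time series of line forces into a simple tabular load file.

    series_csv: CSV with columns time, line_id, fx, fy, fz (one row per instant per line)
    node_map:   maps a riser/mooring line id to the FE node where it attaches
    """
    with open(series_csv) as src, open(out_path, "w", newline="") as dst:
        writer = csv.writer(dst)
        writer.writerow(["time", "node", "fx", "fy", "fz"])
        for row in csv.DictReader(src):
            node = node_map[row["line_id"]]            # attachment node for this line
            writer.writerow([row["time"], node, row["fx"], row["fy"], row["fz"]])

# hypothetical usage: line "riser_03" attaches at FE node 1205
forces_to_fe_loads("line_forces.csv", "fe_loads.csv", {"riser_03": 1205})
```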
56.
Compositional Decompilation using LLVM IR. Eklind, Robin, January 2015
Decompilation or reverse compilation is the process of translating low-level machine-readable code into high-level human-readable code. The problem is non-trivial due to the amount of information lost during compilation, but it can be divided into several smaller problems which may be solved independently. This report explores the feasibility of composing a decompilation pipeline from independent components, and the potential of exposing those components to the end-user. The components of the decompilation pipeline are conceptually grouped into three modules. Firstly, the front-end translates a source language (e.g. x86 assembly) into LLVM IR; a platform-independent low-level intermediate representation. Secondly, the middle-end structures the LLVM IR by identifying high-level control flow primitives (e.g. pre-test loops, 2-way conditionals). Lastly, the back-end translates the structured LLVM IR into a high-level target programming language (e.g. Go). The control flow analysis stage of the middle-end uses subgraph isomorphism search algorithms to locate control flow primitives in CFGs, both of which are described using Graphviz DOT files. The decompilation pipeline has been proven capable of recovering nested pre-test and post-test loops (e.g. while, do-while), and 1-way and 2-way conditionals (e.g. if, if-else) from LLVM IR. Furthermore, the data-driven design of the control flow analysis stage facilitates extensions to identify new control flow primitives. There is huge potential for future development. The Go output could be made more idiomatic by extending the post-processing stage, using components such as Grind by Russ Cox which moves variable declarations closer to their usage. The language-agnostic aspects of the design will be validated by implementing components in other languages; e.g. data flow analysis in Haskell. Additional back-ends (e.g. Python output) will be implemented to verify that the general decompilation tasks (e.g. control flow analysis, data flow analysis) are handled by the middle-end. / BSc dissertation written during an ERASMUS exchange from Uppsala University to the University of Portsmouth.
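To illustrate the middle-end idea of locating control flow primitives by subgraph isomorphism search over control flow graphs, the sketch below encodes a tiny CFG and a pre-test-loop pattern and searches for matches. It is written in Python with networkx purely as an illustration; the dissertation's pipeline works on Graphviz DOT files and is not based on this code, and the node names are hypothetical.

```python
import networkx as nx
from networkx.algorithms import isomorphism

# Hypothetical control flow graph of a function: nodes are basic blocks.
cfg = nx.DiGraph()
cfg.add_edges_from([("entry", "cond"), ("cond", "body"),
                    ("body", "cond"), ("cond", "exit")])

# Pattern for a pre-test loop: a condition block that either enters the
# body (which loops back to the condition) or falls through to the exit.
pretest_loop = nx.DiGraph()
pretest_loop.add_edges_from([("cond", "body"), ("body", "cond"), ("cond", "exit")])

matcher = isomorphism.DiGraphMatcher(cfg, pretest_loop)
for mapping in matcher.subgraph_isomorphisms_iter():
    print("pre-test loop found:", mapping)   # maps CFG nodes to pattern roles
```

A data-driven design like the one described in the report would keep each primitive as a separate pattern description, so new primitives can be added without touching the search code.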
57.
Contributions and difficulties of data collection using GPS receivers to conduct a survey on mobility. Pham, Thi Huong Thao, 02 May 2016
Travel survey methods based on new technologies have evolved over the past few decades, shifting from limited experiments to large-scale travel surveys. GPS-based data collection methods have become particularly popular in travel behavior research, mainly because of the worldwide coverage and accuracy of the GPS system. We took the opportunity of the French National Travel Survey (FNTS) 2007-08 to conduct the first nationwide experience of embedding such a "GPS package" in a traditional survey, with a sub-sample of approximately 957 voluntary respondents. This thesis begins by reviewing the interest of new technologies for mobility surveys, their advantages and their limitations. Before processing the GPS data collected in the field, it is important to understand the likely acceptability of a GPS device among FNTS respondents, specifically whether a sufficient proportion would be willing to carry a GPS device while completing a travel diary and, if not, why not. The findings confirm that respondents who volunteer for a GPS survey are more conscientious and describe their mobility better. We then describe the profile of respondents who accept the GPS survey. One of the main reasons for the low GPS acceptance rate in the FNTS 2007-08 was the availability of GPS receivers; to increase the availability rate, the times and durations of the survey visits need to be calculated so as to reduce the rate at which receivers are left unused. Beyond this acceptability issue, a challenge in the post-processing of GPS data is the development of methods to fill in missing GPS data and to automatically reconstruct continuous sequences, both in space and in time. The post-processing algorithm and the architecture of the GPS data post-processing software for the FNTS 2007-08 are then described. The results of the post-processing software are validated by comparison with data collected by conventional methods, as well as with the CAPI-GPS, which provides, for the trips made on a randomly drawn day, additional information on the reliability of the device and on more detailed trip characteristics (e.g. mode and purpose). Next, we compare the descriptions of mobility obtained by the two methods, the conventional survey questionnaire and the GPS traces, for the days covered by both observation tools. GPS traces are matched to reported daily trips automatically on the basis of their chronology, and the matching is also checked manually. By identifying the characteristics of unmatched GPS traces and daily trips, we assess the reasons for the mismatches. Finally, we evaluate the contributions and challenges of data collection using GPS receivers. This study shows that GPS surveys can be used successfully to complement conventional transport surveys, but that it is still too early to predict the complete substitution of conventional mobility surveys by GPS surveys. Keywords: transport survey, mobility, new technologies, data collection, GPS, post-processing, acceptability, CAPI-GPS, GPS traces, daily trip.
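As an illustration of the chronological matching step described above, the sketch below greedily pairs GPS-detected trips with diary trips by maximum time overlap. It is a hedged, simplified example: the data structures, field names, and the five-minute threshold are assumptions, not the algorithm actually implemented for the FNTS post-processing.

```python
from datetime import timedelta

def overlap(a_start, a_end, b_start, b_end):
    """Length of the time overlap between two intervals."""
    latest_start = max(a_start, b_start)
    earliest_end = min(a_end, b_end)
    return max(earliest_end - latest_start, timedelta(0))

def match_trips(gps_traces, diary_trips, min_overlap=timedelta(minutes=5)):
    """Greedy chronological matching of GPS traces to reported daily trips.

    Each trace/trip is a dict with 'id', 'start' and 'end' datetime fields (assumed layout).
    """
    matches = []
    for trace in gps_traces:
        best = max(diary_trips,
                   key=lambda trip: overlap(trace["start"], trace["end"],
                                            trip["start"], trip["end"]),
                   default=None)
        if best and overlap(trace["start"], trace["end"],
                            best["start"], best["end"]) >= min_overlap:
            matches.append((trace["id"], best["id"]))
    return matches
```

Traces or trips left without a partner after such a pass are the "unmatched" cases whose characteristics the thesis examines to explain the mismatches.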
58.
Optimization of NC program using CAD/CAM software. Paseka, Jan, January 2014
The aim of this master's thesis is to propose savings in the process of technological preparation of production in a manufacturing company. The first part elaborates a general theoretical study of the current components and NC programs. Based on this study and on the completed analysis, optimization proposals were defined and are described in the second part. Taken together, these proposals yield the time and cost savings needed to bring a prototype project into serial production.
59.
Province generation with a post-process (Provinsgenerering med postprocess). Aldabbagh, Haimen, January 2014
This bachelor's thesis is part of a larger project carried out at Paradox Interactive. The project aims to improve a map generator for the strategy game Europa Universalis IV. This work addresses the creation and implementation of a province generator that divides a pre-generated landscape into provinces. The provinces in the game are the regions on the game map that the game mechanics are based upon. The improvements expected of the new province generator include the following properties:
• The provinces that are created have more logically placed boundaries that are affected by the structure of the landscape.
• The program gives the user more control over how the end result should look by letting the user set the values of input parameters.
• The execution should not exceed an approximate time limit, set by Paradox Interactive.
The work began with research on the topics of map creation and map division, which gave enough knowledge to plan the implementation of the program. The programming language used is Java. The implementation of the program is based on many well-known algorithms, the most notable of which is Fortune's algorithm, which performs the main task of the province division: the creation of Voronoi diagrams. The Voronoi diagrams are used to divide the map into regions which, through a post-process, result in the creation of the provinces. Other well-known algorithms and methods used or addressed in this report include Lloyd relaxation, Bresenham's line algorithm, Scan Line Flood Fill, Delaunay triangulation and the Bowyer-Watson algorithm. The result of this work is a Java application that can load a map file containing information about a landscape structure and create a division into provinces with provincial boundaries that depend on the structure of the landscape. The result of the province division can be controlled by a number of user-defined parameters. The program could not be fully calibrated during the project because the landscape generator was not ready in time to provide a map of a generated landscape. The generated provinces can be saved as an image file on the hard disk.
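To illustrate the core idea of seeding provinces and smoothing them with Lloyd relaxation, the sketch below builds a discrete (grid-based) Voronoi labelling by nearest seed and then moves each seed to the centroid of its region. It is a brute-force Python illustration under assumed map dimensions, not the Java implementation or Fortune's sweep-line algorithm used in the thesis, and it ignores the landscape structure that the real generator takes into account.

```python
import numpy as np

def assign_provinces(seeds, width, height):
    """Label each map cell with the index of its nearest seed (a discrete Voronoi diagram)."""
    ys, xs = np.mgrid[0:height, 0:width]
    dist = (xs[..., None] - seeds[:, 0])**2 + (ys[..., None] - seeds[:, 1])**2
    return dist.argmin(axis=-1)

def lloyd_step(seeds, labels):
    """Move each seed to the centroid of its region (one Lloyd relaxation step)."""
    new_seeds = seeds.copy().astype(float)
    for i in range(len(seeds)):
        ys, xs = np.nonzero(labels == i)
        if len(xs):
            new_seeds[i] = (xs.mean(), ys.mean())
    return new_seeds

rng = np.random.default_rng(0)
seeds = rng.uniform(0, 256, size=(20, 2))   # 20 hypothetical province seeds on a 256x256 map
for _ in range(3):                          # a few relaxation iterations even out region sizes
    labels = assign_provinces(seeds, 256, 256)
    seeds = lloyd_step(seeds, labels)
```

A raster post-process (for example, flood filling against landscape features) would then turn such regions into the final provinces, as the report describes.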
60.
Digital Image Processing Algorithms Research Based on FPGA. Xu, Haifeng, January 2011
As the development of TV systems in America shows, digital broadcasting of digital TV is the road ahead. Digital television is now prevailing in China, and the government is promoting its popularity. However, because of the state of economic development, analog television will keep its place in the TV market for a long period. Since the broadcasting system has not been reformed, we should not only make use of the traditional analog system we already have, but also improve the picture quality of the analog system. With the rapid development of high-end television and the research and application of digital television techniques, the flaws caused by interlaced scanning in traditional analog television, such as color crawling, flicker, and the boundary blur and jagged edges of fast-moving objects, are more and more obvious. Therefore the conversion of interlaced scan to progressive scan, i.e. de-interlacing, is an important part of current television production. At present many kinds of TV sets on the market are based on digital processing technology and use various digital methods to process the interlaced, low-field-rate video data, including de-interlacing and field rate conversion. The digital processing chip is the heart of these new-fashioned TV sets and is the reason for the improvement in visual quality. Because of the requirement for real-time television signal processing, most of these chips feature novel hardware architectures or data processing algorithms. So far, the algorithm with the best quality is based on motion compensation, in which motion detection and motion estimation are inevitably involved, despite the high computation cost. In video processing chips, the performance and complexity of the motion estimation algorithm have a direct impact on the speed, area and power consumption of the chip. Motion estimation also determines the efficiency of the coding algorithms in video compression. This thesis proposes a Down-Sampled Diamond NTSS algorithm (DSD-NTSS) based on the New Three Step Search (NTSS) algorithm, taking both the performance and the complexity of motion estimation algorithms into consideration. The proposed DSD-NTSS algorithm makes use of the similarity of neighboring pixels in the same image and down-samples pixels in the reference blocks in a decussate (checkerboard) pattern to reduce the computation cost. Experimental results show that DSD-NTSS offers a better trade-off between performance and complexity. The proposed DSD-NTSS roughly halves the computation cost of NTSS while delivering equivalent image quality. Compared further with Four Step Search (FSS), Diamond Search (DS), Three Step Search (TSS) and other fast search algorithms, the proposed DSD-NTSS is generally superior when performance and complexity are considered together. This thesis focuses on this computation-reduced motion estimation algorithm for a video post-processing system and researches the FPGA design of the system.
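To make the down-sampling idea concrete, the sketch below applies a checkerboard (decussate) pixel subset when computing the SAD matching cost inside a simple exhaustive block search. It is a hedged Python illustration only: it uses a full search rather than the NTSS/DSD-NTSS search pattern, and the block size, search range and test frames are assumptions, not the thesis's FPGA implementation.

```python
import numpy as np

def sad_downsampled(block, candidate):
    """Sum of absolute differences over a checkerboard (decussate) subset of pixels."""
    mask = (np.indices(block.shape).sum(axis=0) % 2) == 0   # keep every other pixel
    return np.abs(block[mask].astype(int) - candidate[mask].astype(int)).sum()

def best_match(cur, ref, bx, by, bsize=16, search=7):
    """Exhaustive search around (bx, by); a real NTSS/DSD-NTSS search visits far fewer points."""
    block = cur[by:by+bsize, bx:bx+bsize]
    best_cost, best_mv = None, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = by + dy, bx + dx
            if 0 <= y and 0 <= x and y + bsize <= ref.shape[0] and x + bsize <= ref.shape[1]:
                cost = sad_downsampled(block, ref[y:y+bsize, x:x+bsize])
                if best_cost is None or cost < best_cost:
                    best_cost, best_mv = cost, (dx, dy)
    return best_mv

# tiny self-check with a known shift
cur = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
ref = np.roll(cur, (2, 3), axis=(0, 1))          # reference frame shifted by (dy=2, dx=3)
print(best_match(cur, ref, bx=16, by=16))        # -> (3, 2)
```

Evaluating only half of the pixels per candidate block is what gives the reported halving of the matching cost, at the price of a slightly noisier cost surface.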