191

Application of an improved video-based depth inversion technique to a macrotidal sandy beach

Bergsma, Erwin Willem Johan January 2017 (has links)
Storm conditions are considered the dominant erosional mechanism for the coastal zone. Morphological changes during storms are hard to measure due to energetic environmental conditions. Surveys are therefore mostly executed right after a storm, on a local scale, over a single storm or a few storms [days to weeks]. The impact of a single storm might depend on the preceding sequence of storms. Here, a video camera system is deployed at the beach of Porthtowan, in the South-West of England, to observe and assess short-term storm impact and long-term recovery. The morphological change is observed with a state-of-the-art video-based depth estimation tool based on the linear dispersion relationship between depth and wave celerity (cBathy). This work presents the first application of this depth estimation tool in a highly energetic macrotidal environment. Within this application two sources of first-order inaccuracies are identified: 1) camera-related issues at the camera boundaries and 2) fixed pixel locations for all tidal elevations. These systematic inaccuracies are overcome by 1) an adaptive pixel collection scheme and camera boundary solution and 2) freely moving pixels. Together, the solutions result in a maximum RMS-error reduction of 60%. From October 2013 to February 2015, depths were estimated hourly during daylight. This period included the 2013-2014 winter season, the most energetic winter since wave records began. Inter-tidal beach surveys show 200 m3/m of erosion, while the sub-tidal video-derived bathymetries show a sediment loss of around 20 m3/m. At the same time, the sub-tidal (outer) bar changes from 3D to linear due to a significant increase in alongshore wave power during storm conditions. A complex-EOF based storm-by-storm analysis reveals that the individual storm impact at Porthtowan can be described as a combined function of storm-integrated incident offshore wave power [P] and disequilibrium, and that the tidal range has limited effect on the storm impact. The inter- and sub-tidal domains together gain volume over the 2013-2014 winter, and the two domains show an interactive inverse behaviour indicating sediment exchange during relatively calm summer conditions. The inter-tidal domain shows accelerated accretion during more energetic conditions in fall 2014. The outer bar slowly migrated onshore until more energetic wave conditions activated the sub-tidal storm deposits and three-dimensionality was reintroduced. The inter-tidal beach shows full recovery in November 2014, 8 months after the stormy winter.
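The core of the depth inversion mentioned above is the linear dispersion relationship. The sketch below is a minimal illustration of that relation only, not the cBathy algorithm: it assumes a wave angular frequency and wavenumber have already been estimated (in practice from cross-spectral analysis of the video pixel intensities) and inverts the relation for depth.

```python
import numpy as np

G = 9.81  # gravitational acceleration [m/s^2]

def depth_from_dispersion(omega, k):
    """Invert the linear dispersion relation omega**2 = g*k*tanh(k*h) for depth h.

    omega : wave angular frequency [rad/s], e.g. estimated from pixel-intensity time series
    k     : wavenumber magnitude [rad/m] estimated for the same frequency band
    Returns depth h [m]; NaN where the pair is inconsistent with the relation.
    """
    ratio = omega**2 / (G * k)
    valid = (ratio > 0) & (ratio < 1)
    safe = np.clip(ratio, 1e-9, 1 - 1e-9)
    return np.where(valid, np.arctanh(safe) / k, np.nan)

# Example: a 10 s swell with an observed wavelength of 60 m
omega = 2 * np.pi / 10.0
k = 2 * np.pi / 60.0
print(depth_from_dispersion(omega, k))  # roughly 4 m of water depth
```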
192

Digital Watermarking for Depth-Image-Based Rendering 3D Images and Its Application to Quality Evaluation

Chen, Lei 10 October 2018 (has links)
Due to the rapid development of the 3D display market, the protection and authentication of the intellectual property rights of 3D multimedia has become an essential concern. As a consequence, digital watermarking for 3D images and video is attracting considerable attention. The depth-image-based rendering (DIBR) technique has been playing a critical role in 3D content representation because of its numerous advantages. A good digital watermarking algorithm should be robust to various possible attacks, including geometric distortions and compressions. Different from ordinary 2D digital watermarking, there are more specific requirements for 3D watermarking, especially for DIBR 3D image watermarking: not only the center view but also the virtual left and right views can be illegally distributed. Therefore, the embedded watermark information should be accurately extractable from each of these three views individually for content authentication, even under attacks. In this thesis, we focus on digital watermarking and watermarking-based quality evaluation for DIBR 3D images. We first present a 2D image and video watermarking method based on the contourlet transform, which is then extended to a robust contourlet-based watermarking algorithm for DIBR 3D images. The watermark is embedded into the center view by quantizing certain contourlet coefficients. The virtual left and right views are synthesized from the watermarked center view and the corresponding depth map. One advantage of our algorithm is its simplicity and practicality; however, the watermark extraction performance needs to be further improved. As an improvement, a blind watermarking algorithm for DIBR 3D images based on feature regions and the ridgelet transform is proposed. The watermarked view has good perceptual quality under both objective and subjective image quality measures. Compared with other related and state-of-the-art methods, the proposed algorithm shows superiority in terms of watermark extraction and robustness to various attacks. Furthermore, as one of the most promising techniques for quality evaluation, a watermarking-based quality evaluation scheme is developed for DIBR 3D images. The quality of the watermarked center view and of the synthesized left and right views under distortions can be estimated by examining the degradation of the corresponding extracted watermarks. Simulation results demonstrate that our scheme performs well in quality evaluation for DIBR 3D images under attacks.
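The embedding step described here, quantizing selected transform coefficients, can be sketched with a generic quantization-index-modulation (QIM) scheme. The code below is an assumed, simplified stand-in: it operates on an arbitrary coefficient array rather than actual contourlet coefficients, and the step size of 8.0 is illustrative.

```python
import numpy as np

def qim_embed(coeffs, bits, step=8.0):
    """Embed one bit per coefficient by snapping it to one of two interleaved lattices."""
    coeffs = np.asarray(coeffs, dtype=float)
    offset = np.where(np.asarray(bits) == 1, step / 2.0, 0.0)
    return np.round((coeffs - offset) / step) * step + offset

def qim_extract(coeffs, step=8.0):
    """Recover each bit by checking which lattice the received coefficient is closer to."""
    coeffs = np.asarray(coeffs, dtype=float)
    d0 = np.abs(coeffs - np.round(coeffs / step) * step)
    d1 = np.abs(coeffs - (np.round((coeffs - step / 2) / step) * step + step / 2))
    return (d1 < d0).astype(int)

rng = np.random.default_rng(0)
c = rng.normal(0, 20, 16)           # stand-in for selected transform coefficients
w = rng.integers(0, 2, 16)          # watermark bits
assert np.array_equal(qim_extract(qim_embed(c, w)), w)  # lossless round trip without attacks
```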
193

Stav a role invazního mlže slávičky mnohotvárné (Dreissena polymorpha) ve vodárenské nádrži Želivka / State and role of the invasive zebra mussel (Dreissena polymorpha) in the Želivka Reservoir

MERZOVÁ, Martina January 2017 (has links)
The work focuses on the status and role of the zebra mussel (Dreissena polymorpha) in the Želivka water-supply reservoir. The literature review summarizes the basic biology of the zebra mussel and its impact on the aquatic ecosystem. Current knowledge about this invasive bivalve shows that its presence in a water reservoir has certain advantages but also disadvantages. The main advantage is that the zebra mussel, as a filter feeder, increases water transparency and provides food for animals living in the aquatic environment or in its vicinity. The disadvantages are mainly economic: the mussel clogs pipes and restricts water flow, and it attaches to marker buoys, fishing nets, and the hulls of ships, which can be damaged from the inside. In the field part of the study, the zebra mussel population was sampled in different parts of the reservoir and the following parameters were determined: temperature and oxygen stratification, substrate coverage as a function of depth and substrate type, shell length along the horizontal and vertical gradients, and filtering capacity and volume rate. The results showed that both the horizontal and the vertical gradient affect the occurrence and shell size of the mussels. Zebra mussels occur mostly at depths of 1-9 m on rocky or stony substrate. The greatest occurrence was found at Budeč and at Hráz, where the water is less eutrophic than at Zahrádka. Based on these parameters and literature data, a hypothetical filtration capacity was calculated and its possible impact on the components of the reservoir ecosystem is discussed.
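The hypothetical filtration capacity mentioned above is essentially colonised area times mussel density times a per-individual filtration rate. A back-of-the-envelope sketch follows; every number in it is invented for illustration and none comes from the thesis.

```python
# Hypothetical population-level filtration estimate (illustrative assumptions only).
colonised_area_m2 = 2.0e5          # assumed area of suitable substrate at 1-9 m depth
density_ind_per_m2 = 1500          # assumed mean zebra mussel density
rate_l_per_ind_per_day = 1.0       # assumed per-individual filtration rate [L/ind/day]

filtered_m3_per_day = colonised_area_m2 * density_ind_per_m2 * rate_l_per_ind_per_day / 1000.0
print(f"The population would filter roughly {filtered_m3_per_day:,.0f} m^3 of water per day")
```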
194

A Novel Image Retrieval Strategy Based on VPD and Depth with Pre-Processing

Wang, Tianyang 01 August 2015 (has links)
This dissertation proposes a comprehensive workflow for image retrieval. It contains four components: denoising, restoration, color feature extraction, and depth feature extraction. We propose a visual perceptual descriptor (VPD) to extract color features from an image. The gradient direction is calculated at each pixel, and the VPD is moved over the entire image to locate regions with similar gradient direction. Color features are extracted only at these pixels. Experiments demonstrate that the VPD is an effective and reliable descriptor for image retrieval. We propose a novel depth feature for image retrieval. Any 2D image is regarded as the convolution of a corresponding sharp image with a Gaussian kernel of unknown blur amount. A sparse depth map is computed as the absolute difference between the original image and its sharp version. The depth feature is extracted as the nuclear norm of the sparse depth map. Experiments validate the effectiveness of this approach for depth recovery and image retrieval. We present a model for image denoising. A gradient term is incorporated and can be merged into the original model based on geometric measure theory. Experiments illustrate that this model is effective for image denoising and that it can improve retrieval performance by denoising a query image. A model is also proposed for image restoration. It is an extension of the traditional singular value thresholding (SVT) algorithm, addressing the issue that SVT cannot recover a matrix with missing rows or columns. We propose a way to fill such rows and columns and then apply SVT to restore the damaged image; the pre-filled entries are recomputed by averaging their neighboring pixels. Experiments demonstrate the effectiveness of this model for image restoration, and it can improve retrieval performance by restoring a damaged query image. Finally, the capability of the whole workflow is tested, and experiments demonstrate its effectiveness in image retrieval.
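The depth feature described above can be sketched in a few lines. The snippet below is an assumed simplification: it uses unsharp masking as a stand-in for the sharp-image estimate (the dissertation derives the sharp version from its Gaussian blur model) and then takes the nuclear norm of the absolute difference.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def depth_feature(gray_image, sigma=2.0, amount=1.0):
    """Single-number depth feature in the spirit of the dissertation:
    nuclear norm of |image - sharpened(image)|.
    Unsharp masking is an assumed stand-in for the author's sharp-image estimate."""
    img = np.asarray(gray_image, dtype=float)
    blurred = gaussian_filter(img, sigma)
    sharp = img + amount * (img - blurred)          # unsharp-mask sharpening
    sparse_depth = np.abs(img - sharp)              # proxy for the sparse depth map
    return np.linalg.norm(sparse_depth, ord='nuc')  # nuclear norm = sum of singular values

rng = np.random.default_rng(1)
print(depth_feature(rng.random((64, 64))))
```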
195

Dynamic equilibrium in limit order markets: analysis of depth disclosure and lit fragmentation

Orellana Alarcón, Rodrigo Ignacio January 2016 (has links)
Magíster en Ciencias de la Ingeniería, Mención Matemáticas Aplicadas / Ingeniero Civil Matemático / We develop a dynamic continuous-time model to simulate trading across multiple interconnected limit order markets. Traders make endogenous, sequential optimal decisions to maximize their expected payoffs across the different venues, taking into account their own incentives, market conditions, potential future trading decisions, and the strategies adopted by other agents. We focus our study on depth disclosure and lit fragmentation and test three main scenarios: (i) a single Lit market, (ii) a single Opaque market, and (iii) multiple interconnected markets with both a Lit and an Opaque venue trading a single common asset. Our main results indicate that, in a single-market environment, depth disclosure generates competition that increases liquidity supply and, as a consequence, reduces the spread and microstructure noise and increases depth at the best quotes and the total depth of the book. Agents with a positive absolute private valuation of the asset increase their benefits at the expense of agents without a private valuation, by decreasing their waiting costs and increasing their gains per trade. These benefits are amplified in a multi-market environment because trading restrictions generate more aggressive competition. We find a flow of liquidity toward the Lit venue, driven by multi-market liquidity suppliers, which reduces the spread and increases depths. To remain competitive, agents in the Opaque venue enter the competition as well, similarly reducing the spread and microstructure noise in that exchange. Multi-market liquidity demanders able to trade in both venues show the best performance of all agents, mainly due to a significant reduction in their execution times.
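The notion of depth disclosure tested here can be illustrated with a toy order book that either publishes its full depth (Lit) or only its best quotes (Opaque). This is a hypothetical illustration of the mechanism, not the thesis model, and all class and method names are invented.

```python
from collections import defaultdict

class LimitOrderBook:
    """Toy single-asset book: a Lit venue publishes depth at every price level,
    an Opaque venue discloses only the best quotes."""
    def __init__(self, lit=True):
        self.lit = lit
        self.bids = defaultdict(int)   # price -> resting quantity
        self.asks = defaultdict(int)

    def add_limit_order(self, side, price, qty):
        (self.bids if side == "buy" else self.asks)[price] += qty

    def published_depth(self):
        if self.lit:                                   # full book is disclosed
            return dict(bids=dict(self.bids), asks=dict(self.asks))
        best_bid = max(self.bids, default=None)        # only top of book is disclosed
        best_ask = min(self.asks, default=None)
        return dict(best_bid=(best_bid, self.bids.get(best_bid, 0)),
                    best_ask=(best_ask, self.asks.get(best_ask, 0)))

book = LimitOrderBook(lit=False)
book.add_limit_order("buy", 99.5, 10)
book.add_limit_order("sell", 100.5, 7)
print(book.published_depth())   # an Opaque venue reveals only the best bid and ask
```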
196

Caracterização fitofisionômica em trecho de ocorrência de cerrado no Parque Nacional da Serra da Canastra (MG) e suas interações com a textura, profundidade e umidade do solo / Phytophysiognomic characterization of a cerrado area in the Parque Nacional da Serra da Canastra (MG) and its interactions with soil texture, depth, and moisture

Silva, Marcia Corrêa Vieira da [UNESP] 27 October 2006 (has links) (PDF)
In recent decades the cerrado domain has been one of the most degraded in Brazil. There is therefore a pressing need to improve knowledge of this vegetation formation in order to support conservation practices in the few areas that still retain the original cerrado. The objectives of this study are: to identify the different cerrado vegetation formations in a stretch of the Parque Nacional da Serra da Canastra (MG), focusing on the phytophysiognomies present in the biome; to delimit and map, in the form of a profile, the vegetation patterns of the study area; to define the relationships between the different phytophysiognomies and the texture, depth, and moisture of the soils that support the cerrado in the selected stretch; and, finally, to indicate parameters aimed at guaranteeing the preservation of the biome under investigation. The research follows the conceptual framework of Goodland and Ferri (1979), which defines physiognomic classes along a gradient from near-grassland to near-forest, with gradual and continuous transitions. This biomass gradient is divided into five types: campo limpo, campo sujo, campo cerrado, cerrado, and cerradão.
197

Research note: A rock mulch layer supported little vegetation in an arid reclamation setting

Fehmi, Jeffrey S. 07 November 2017 (has links)
Adding a surface rock layer (also called rock armor or rock mulch) to constructed slopes improves erosion resistance but has had mixed effects on revegetation. This study investigated the effects of rock layer depth (no rocks, 10-, 15-, and 20-cm rock layers) and rock size (5-20 cm diameter rocks) on vegetation cover. Seeding was applied four times in the first 2 years. After 3 years, plots with a rock layer averaged 7% vegetative cover compared to 85% on plots without a rock layer. There was a nonsignificant trend toward less vegetation with a deeper rock layer.
198

Vinhaça concentrada de cana-de-açúcar: monitoramento das propriedades químicas do solo e mineralização líquida de nitrogênio / Concentrated vinasse from sugarcane: monitoring of soil chemical properties and net nitrogen mineralization

Alinne da Silva 16 July 2012 (has links)
The generation of large volumes of vinasse from ethanol production has raised questions about its disposal and possible storage. In São Paulo State, CETESB standard P4.231 restricted the application of vinasse to many soils, creating the need to distribute it in areas far from the mill. Because of the large amount of water in the residue, however, transport becomes economically unfeasible. One alternative for reducing transport costs is to reduce the volume by evaporation, producing concentrated vinasse (CV). The high temperatures used during concentration modify the organic matter and consequently alter the dynamics of nitrogen transformations in the soil; moreover, CV is applied in the planting row, unlike non-concentrated vinasse (NCV), which is applied over the total area, so little is known about its effects on the soil and its agronomic efficiency. The objectives were: (1) to characterize some physicochemical properties of CV and compare them with NCV; (2) to evaluate the effects of CV rates on soil fertility, ion percolation, ratoon sugarcane yield, and the technological quality of the stalks; and (3) to determine net N mineralization (Nm). To this end, (I) samples of CV and NCV were collected at two mills in 2010 and 2011; (II) a field experiment was carried out in a commercial sugarcane area in Batatais, SP; (III) an aerobic incubation experiment was conducted to determine Nm and, by fitting a first-order equation, the potential N mineralization (N0) and the N mineralization rate constant (k); and (IV) the SoilN module of the APSIM model was parameterized for a soil that received vinasse. The concentration process caused large variation in the content of some elements, especially Na+ and the ammoniacal and nitrate forms of nitrogen. Application of 30 m3 ha-1 of CV in the sugarcane row increased the concentrations of Cl-, NO3-, Ca2+, Mg2+, and SO42- in soil solutions collected by extractors at 0.80 m depth. In general, pH, CEC, and exchangeable cation contents increased with the applied rates, while m% values decreased, resulting in greater nutrient availability, higher soil fertility, and higher yield: CV treatments produced gains of 8 Mg ha-1 relative to the control, and even with the large amount of K+ concentrated in the planting row the technological characteristics of the stalks were not impaired. Based on the values of Nm, N0, and k, it can be concluded that mineral N was immobilized by the microbial biomass in the CV treatments. The SoilN module of APSIM performed well in simulating NO3- production, since the nitrification calculated daily by the model fit the nitrate values observed in the incubations, and the model successfully calculated N losses in the treatment with the highest NCV rate from the water retention curve and the volume of water applied to the soil.
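The first-order model referred to above is Nm(t) = N0 (1 - exp(-k t)). A minimal fitting sketch with scipy is shown below; the incubation data points are invented for illustration and are not the thesis measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

def first_order_mineralization(t, N0, k):
    """Cumulative net N mineralized at time t: Nm(t) = N0 * (1 - exp(-k * t))."""
    return N0 * (1.0 - np.exp(-k * t))

# Illustrative incubation data (days, mg N kg^-1 soil); values are made up.
t_days = np.array([0, 7, 14, 28, 56, 84, 112])
nm_obs = np.array([0.0, 8.0, 14.5, 24.0, 33.5, 38.0, 40.5])

(N0_fit, k_fit), _ = curve_fit(first_order_mineralization, t_days, nm_obs, p0=(40.0, 0.02))
print(f"N0 = {N0_fit:.1f} mg kg^-1, k = {k_fit:.3f} d^-1")
```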
199

Avaliação da profundidade de polimerização através de testes de microdureza de duas resinas compostas de matrizes distintas em diferentes densidades de energia e períodos, utilizando LED como fonte fotoativadora / Evaluation of depth of cure by microhardness testing of two composite resins with different matrices at different energy densities and time periods, using an LED light-curing unit

Guilherme Saintive Cardia 31 May 2011 (has links)
This study evaluated the depth of cure, through microhardness testing, of two composite resins with different matrices, light-cured with an LED unit at different energy densities and assessed at baseline and after 7 days. Knoop hardness testing was performed on specimens made of two light-cured composites (P90, 3M/ESPE, shade A2; Z250, 3M/ESPE, shade A2). The specimens were divided into 8 groups (n = 10), half made with each composite resin tested. Each group was defined by a different combination of power density and exposure time (600 mW/cm2 x 40 s; 1000 mW/cm2 x 40 s; 1000 mW/cm2 x 20 s; 1400 mW/cm2 x 20 s), and each specimen was measured at four depths (0, 1, 2, and 3 mm). Each specimen was made with the aid of a metal matrix consisting of three parts (top, middle, and bottom), each 1 mm thick with a central hole 5 mm in diameter. A piece of polyester strip was placed over each resin-filled part and pressed with a glass slide to produce a smooth, flat surface. An LED curing unit was used; to achieve the different power densities, plastic rings were attached to the tip of the unit. Curing was carried out with the tip of the unit in contact with the polyester strip placed on top of the specimen (0 mm). Thus each specimen was composed of three parts, each 1 mm thick. Immediately after curing, the parts of the specimens were separated and five Knoop hardness indentations (initial hardness) were made with a 50 g load for 30 s on four surfaces: 1) the upper surface of the top part, facing the light source (0 mm); 2) the lower surface of the top part (1 mm); 3) the lower surface of the middle part (2 mm); and 4) the lower surface of the bottom part (3 mm). After 7 days of storage in an oven at 37 °C, new hardness readings were taken (final hardness). It was found that: 1) composite P90 showed lower mean Knoop hardness values than the other composite studied, Z250; 2) the greater the depth, the lower the Knoop hardness values measured; 3) final Knoop hardness values were higher after 7 days than the initial values; and 4) higher mean Knoop hardness values were measured as the power density increased. These results indicate that: 1) the resin's hardness improves over a period of 7 days; 2) the greater the depth, the lower the hardness; 3) the higher the power density used in this study, the higher the hardness measured; and 4) resin P90 showed lower values than resin Z250 evaluated under the same conditions.
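The curing protocols pair an irradiance (power density) with an exposure time; their product is the radiant exposure, the "energy density" referred to in the title. A small sketch of that arithmetic for the four protocols listed in the abstract:

```python
# Radiant exposure ("energy density") = irradiance x exposure time.
# Protocols taken from the abstract; the J/cm^2 values follow directly from them.
protocols = [(600, 40), (1000, 40), (1000, 20), (1400, 20)]  # (mW/cm^2, s)

for irradiance_mw, seconds in protocols:
    energy_j_per_cm2 = irradiance_mw / 1000.0 * seconds
    print(f"{irradiance_mw} mW/cm2 x {seconds} s -> {energy_j_per_cm2:.0f} J/cm2")
# 600x40 -> 24, 1000x40 -> 40, 1000x20 -> 20, 1400x20 -> 28 J/cm2
```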
200

An evaluation of local two-frame dense stereo matching algorithms

Van der Merwe, Juliaan Werner 06 June 2012 (has links)
M. Ing. / The process of extracting depth information from multiple two-dimensional images taken of the same scene is known as stereo vision. It is of central importance to the field of machine vision as it is a low level task required for many higher level applications. The past few decades has witnessed the development of hundreds of different stereo vision algorithms. This has made it difficult to classify and compare the various approaches to the problem. In this research we provide an overview of the types of approaches that exist to solve the problem of stereo vision. We focus on a specific subset of algorithms, known as local stereo algorithms. Our goal is to critically analyse and compare a representative sample of local stereo algorithm in terms of both speed and accuracy. We also divide the algorithms into discrete interchangeable components and experiment to determine the effect that each of the alternative components has on an algorithm’s speed and accuracy. We investigate even further to quantify and analyse the effect of various design choices within specific algorithm components. Finally we assemble all of the knowledge gained through the experimentation to compose and optimise a novel algorithm. The experimentation highlighted the fact that by far the most important component of a local stereo algorithm is the manner in which it aggregates matching costs. All of the top performing local stereo algorithms dynamically define the shape of the windows over which the matching costs are aggregated. This is done in a manner that aims to only include pixels in a window that is likely to be at the same depth as the depth of the centre pixel of the window. Since the depth is unknown, the cost aggregation techniques use colour and proximity information to best guess whether pixels are at the same depth when defining the shape of the aggregation windows. Local stereo algorithms are usually less accurate than global methods but they are supposed to be faster and more parallelisable. These cost aggregation techniques result in very accurate depth estimates but unfortunately they are also very expensive computationally. We believe the focus of local stereo algorithm development should be speed. Using the experimental results we developed an algorithm that achieves accuracies in the same order of magnitude as the state-of-the-art algorithms while reducing the computation time by over 50%.
