41

Aplicação da teoria fractal à quantificação da rugosidade e efeito escala da rugosidade / Fractal theory application to roughness quantification and roughness scale effect

Henry Willy Revilla Amezquita 21 January 2005 (has links)
The purpose of the present work is the application of fractal theory to the quantification of rock joint roughness. Rock joint roughness profiles available in the literature were digitized in order to allow quantitative analysis, and the fractal dimension was determined for each profile using three different methods. Among these, the modified divider method was found to be the most adequate. The importance of the intercept parameter, which can also quantify the roughness profile, was likewise verified. Based on a comparative analysis, the intercept parameter was found to quantify roughness better than the fractal dimension. For practical purposes, a dimensionless form of the intercept parameter was established and named the fractal weight. The joint use of the fractal dimension and the fractal weight was found to be the most effective way to quantify rock joint roughness profiles. The behaviour of the fractal dimension, the intercept parameter and the fractal weight with respect to the roughness scale effect was also evaluated; all three depend on the profile length.
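The divider (compass) method referred to above can be sketched in a few lines: step along the digitized profile with rulers of different lengths r, count the steps N(r), and fit log N(r) against log r; the slope gives the fractal dimension and the fit intercept is the intercept parameter the thesis builds on. The following Python sketch only illustrates that textbook idea — the synthetic profile and ruler lengths are invented, and it is not the thesis's "modified" divider variant:

```python
import numpy as np

def divider_count(x, y, r):
    """Count how many ruler steps of length r are needed to walk the profile."""
    anchor = np.array([x[0], y[0]])
    steps = 0
    for xi, yi in zip(x[1:], y[1:]):
        p = np.array([xi, yi])
        if np.linalg.norm(p - anchor) >= r:
            steps += 1
            anchor = p  # coarse approximation: jump to the sampled point, no interpolation
    return max(steps, 1)

def divider_dimension(x, y, rulers):
    """Fit log N(r) = c - D log r; return (D, intercept c)."""
    counts = [divider_count(x, y, r) for r in rulers]
    slope, intercept = np.polyfit(np.log(rulers), np.log(counts), 1)
    return -slope, intercept

# Illustrative synthetic "roughness profile": a noisy sine wave
rng = np.random.default_rng(0)
x = np.linspace(0.0, 100.0, 2000)                      # mm
y = 2.0 * np.sin(0.3 * x) + rng.normal(0.0, 0.3, x.size)

rulers = np.array([0.5, 1.0, 2.0, 4.0, 8.0])           # mm
D, c = divider_dimension(x, y, rulers)
print(f"fractal dimension D ~ {D:.3f}, intercept ~ {c:.3f}")
```

The dimensionless "fractal weight" proposed in the thesis is a rescaling of this intercept; the details of that rescaling are specific to the thesis and are not reproduced here.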
42

Fractal pattern recognition and recreation

Lindén, Fredrik January 2012 (has links)
It goes without saying that in order to find oil, one must know where to look for it. In this thesis I have investigated and created new tools to find salt in the bedrock, and to recreate images according to two parameters: fractal dimension and lacunarity. The oil prospecting company Schlumberger nowadays gathers a huge amount of seismic information, and interpreting the seismic data by hand is very time consuming. My task was to find a good way to detect salt in seismic images of the subsurface, which can then be used to classify the seismic data. The theory indicates that salt bodies behave as fractals, and by studying the fractal dimension and lacunarity we can predict where the salt may be located. I have also investigated three different recreation techniques, so that one can go from parameter values (fractal dimension and lacunarity) back to a possible recreation of the image.
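Both parameters used here, fractal dimension and lacunarity, are commonly estimated from a binarised image by counting occupied boxes at several scales. The Python sketch below illustrates that generic pipeline on a synthetic mask; the blob image, box sizes and speckle threshold are invented for illustration, and this is not Schlumberger's or the thesis's code:

```python
import numpy as np

def box_count(img, size):
    """Number of size x size boxes that contain at least one foreground pixel."""
    h, w = img.shape
    count = 0
    for i in range(0, h, size):
        for j in range(0, w, size):
            if img[i:i + size, j:j + size].any():
                count += 1
    return count

def box_counting_dimension(img, sizes):
    """Slope of log N(s) versus log(1/s) over the chosen box sizes."""
    counts = [box_count(img, s) for s in sizes]
    slope, _ = np.polyfit(np.log(sizes), np.log(counts), 1)
    return -slope

def lacunarity(img, size):
    """Gliding-box lacunarity: E[m^2] / E[m]^2 for the box masses m."""
    h, w = img.shape
    masses = np.asarray([img[i:i + size, j:j + size].sum()
                         for i in range(h - size + 1)
                         for j in range(w - size + 1)], dtype=float)
    return masses.var() / masses.mean() ** 2 + 1.0

# Illustrative binary "salt body" mask: a filled blob plus speckle noise
rng = np.random.default_rng(1)
yy, xx = np.mgrid[0:128, 0:128]
img = ((xx - 64) ** 2 + (yy - 64) ** 2 < 30 ** 2) | (rng.random((128, 128)) > 0.97)

sizes = [2, 4, 8, 16, 32]
print("box-counting D  :", round(box_counting_dimension(img, sizes), 3))
print("lacunarity (8px):", round(lacunarity(img.astype(float), 8), 3))
```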
43

Processing-performance relationships for fibre-reinforced composites

Mahmood, Amjed Saleh January 2016 (has links)
The present study considers the dependence of mechanical properties in composite laminates on the fibre architecture. The objective is to characterise the mechanical properties of composite plates while varying the fibre distribution but keeping the constituent materials unchanged. Image analysis and the fractal dimension have been used to quantify fibre distribution and resin-rich volumes (RRV) and to correlate these with the mechanical properties of the fibre-reinforced composites. The formation, shape and size of RRV in composites with different fabric architectures are discussed. The majority of studies in the literature show a negative effect of RRV on the mechanical behaviour of composite materials. RRV arise primarily as a result of (a) the clustering of fibres as bundles in textiles, (b) the stacking sequence and/or stacking process, (c) the resin properties and flow characteristics, (d) the heating rate, as this directly affects viscosity, and (e) the consolidation pressure. Woven glass and carbon/epoxy fabric composites were manufactured either by the infusion or the resin transfer moulding (RTM) process. The fractal dimension (D) has been employed to explore the correlation between fabric architecture and mechanical properties in glass and/or carbon fibre reinforced composites with different weave styles and fibre volume fractions. The fractal dimension was determined using optical microscopy images and ImageJ with the FracLac plugin, and D has been correlated with the flexural modulus, ultimate flexural strength (UFS), interlaminar shear strength (ILSS) and the fatigue properties of the woven carbon/epoxy fabric composites. The present study also considers the dependence of fatigue properties in composite laminates on static properties and fibre architecture. Four-point flexural fatigue tests were conducted under load control, at a sinusoidal frequency of 10 Hz with amplitude control. Using a stress ratio (R = σmin/σmax) of 0.1 for the tension side and 10 for the compression side, specimens were subjected to maximum fatigue stresses from 95% down to 82.5% of the UFS in steps of 2.5%. The fatigue data were correlated with the static properties and the fibre distribution in order to obtain a useful general description of the laminate behaviour under flexural fatigue load. The analysis of variance (ANOVA) technique was applied to the results obtained, to identify statistically the significance of the correlations. Composite strength and ILSS show a clear dependence on the fibre distribution quantified using D. For the carbon fabric architectures considered in this study, the fatigue properties of composite laminates have significant correlations with the fibre distribution and the static properties of the laminates. A loss of 5-6% in the flexural modulus of composite laminates indicates an increasing risk of failure of the composite laminates under fatigue loads. The endurance limits, based on either the static properties or the fibre distribution, were inversely proportional to the strength for all laminates.
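For concreteness, the fatigue loading matrix described above (maximum stress levels from 95% down to 82.5% of the UFS in 2.5% steps, with R = σmin/σmax = 0.1 on the tension side) can be written out in a few lines of Python; the UFS value used here is an arbitrary placeholder, not a measured value from this study:

```python
import numpy as np

UFS = 800.0        # MPa - illustrative placeholder only, not from the thesis
R_tension = 0.1    # R = sigma_min / sigma_max on the tension side

# Maximum fatigue stress levels: 95 % down to 82.5 % of UFS in 2.5 % steps
for frac in np.arange(0.95, 0.824, -0.025):
    s_max = frac * UFS
    s_min = R_tension * s_max
    print(f"{frac:.1%} UFS: sigma_max = {s_max:6.1f} MPa, sigma_min = {s_min:5.1f} MPa")
```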
44

Intelligent Recognition of Texture and Material Properties of Fabrics

Wang, Xin January 2011 (has links)
Fabrics are unique materials with a variety of properties that affect their performance and end-uses. A computerized fabric property evaluation and analysis method plays a crucial role not only in the textile industry but also in scientific research. Accurate analysis and measurement of fabric properties provides a powerful tool for gauging product quality, assuring regulatory compliance and assessing the performance of textile materials. This thesis investigated solutions for applying computerized methods to evaluate and intelligently interpret the texture and material properties of fabric in an inexpensive and efficient way. Firstly, a method which allows automatic recognition of the basic weave pattern and precise measurement of the yarn count is proposed. The yarn crossed-areas are segmented by a spatial-domain integral projection approach. Combining fuzzy c-means (FCM) and principal component analysis (PCA) on grey level co-occurrence matrix (GLCM) feature vectors extracted from the segments enables the detected segments to be classified into two clusters. Based on the analysis of texture orientation features, the yarn crossed-area states are automatically determined. An autocorrelation method is used to find weave repeats and correct detection errors. The method was validated using computer-simulated woven samples and real woven fabric images. The test samples have various yarn counts, appearances and weave types. All weave patterns of the tested fabric samples were successfully recognized, and the computed yarn counts are consistent with the manual counts. Secondly, we present a methodology for using high-resolution 3D surface data of fabric samples to measure surface roughness in a nondestructive and accurate way. A parameter FDFFT, the fractal dimension estimated from the 2D FFT of the 3D surface scan, is proposed as the indicator of surface roughness. The robustness of FDFFT, namely its rotation and scale invariance, is validated on a number of computer-simulated fractional Brownian images. In addition, in order to evaluate the usefulness of FDFFT, a novel method of calculating standard roughness parameters from the 3D surface scan is introduced. According to the test results, FDFFT is demonstrated to be a fast and reliable parameter for measuring fabric roughness from 3D surface data. We also develop a neural network model using the back-propagation algorithm and FDFFT for predicting the standard roughness parameters; the proposed model shows good performance experimentally. Finally, an intelligent approach for the interpretation of fabric objective measurements is proposed using support vector machine (SVM) techniques. Human expert assessments of fabric samples are used during the training phase in order to adjust the general system into an applicable model. Since the target output of the system is clear, the uncertainty which lies in current subjective fabric evaluation does not affect the performance of the proposed model. The support vector machine is one of the best solutions for handling high-dimensional data classification, so the complexity of the fabric property problem is handled effectively. The generalization ability of the SVM allows the user to design and implement the components separately. Extensive cross-validation is performed to demonstrate the performance of the system.
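The FDFFT parameter rests on a standard idea: the radially averaged power spectrum of a fractal-like surface decays as a power law f^(−β), and the spectral slope maps to a fractal dimension. The Python sketch below shows one such estimate, using the common convention D = (8 − β)/2 for surfaces; the synthetic test surface and this particular convention are assumptions for illustration and may differ from the thesis's exact formulation:

```python
import numpy as np

def fd_from_fft(height_map):
    """Estimate a surface fractal dimension from the radially averaged 2D power spectrum.

    Assumes P(f) ~ f**(-beta) and maps beta to D = (8 - beta) / 2, one common
    convention for fractional-Brownian-like surfaces (2 < D < 3).
    """
    h = height_map - height_map.mean()
    power = np.abs(np.fft.fftshift(np.fft.fft2(h))) ** 2

    n = h.shape[0]                                 # assumes a square height map
    yy, xx = np.indices(h.shape)
    radius = np.hypot(yy - n // 2, xx - n // 2).astype(int)

    sums = np.bincount(radius.ravel(), weights=power.ravel())
    counts = np.bincount(radius.ravel())
    radial = sums[1:n // 2] / counts[1:n // 2]     # radial average, DC bin skipped

    freqs = np.arange(1, n // 2)
    slope, _ = np.polyfit(np.log(freqs), np.log(radial), 1)
    return (8.0 + slope) / 2.0                     # slope = -beta

# Synthesize an illustrative surface with a known spectral slope (beta = 3, i.e. D = 2.5)
rng = np.random.default_rng(2)
n = 256
fy, fx = np.meshgrid(np.fft.fftfreq(n), np.fft.fftfreq(n), indexing="ij")
f = np.hypot(fy, fx)
f[0, 0] = 1.0                                      # avoid division by zero at DC
amplitude = f ** (-3.0 / 2.0)                      # |F| ~ f^(-beta/2) gives P ~ f^(-beta)
phase = np.exp(2j * np.pi * rng.random((n, n)))
surface = np.real(np.fft.ifft2(amplitude * phase))
print("estimated D ~", round(fd_from_fft(surface), 3))   # should come out near 2.5
```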
45

Morphology-based Fault Feature Extraction and Resampling-free Fault Identification Techniques for Rolling Element Bearing Condition Monitoring

SHI, Juanjuan January 2015 (has links)
As the failure of a bearing could cause cascading breakdowns of the mechanical system and then lead to costly repairs and production delays, bearing condition monitoring has received much attention for decades. One of the primary methods for this purpose is based on the analysis of vibration signals measured by accelerometers, because such data are information-rich. The vibration signal collected from a defective bearing is, however, a mixture of several signal components, including the fault-generated impulses, interferences from other machine components, and background noise, where the fault-induced impulses are further modulated by various low-frequency signal contents. The compounded effects of interference, background noise and modulation make it difficult to detect bearing faults. This is further complicated by the nonstationary nature of vibration signals due to speed variations in some cases, such as the bearings in a wind turbine. As such, the main challenges in vibration-based bearing monitoring are how to address the modulation, noise, interference and nonstationarity issues. Over the past few decades, considerable research activity has been devoted to the first three issues. Recently, the nonstationarity issue has also attracted strong interest from both industry and the academic community. Nevertheless, the existing techniques still have the deficiencies listed below: (1) The existing enveloping methods for bearing fault feature extraction are often adversely affected by multiple interferences. To eliminate the effect of interferences, prefiltering is required, which is often parameter-dependent and knowledge-demanding. The selection of proper filter parameters is challenging, and even more so in a time-varying environment. (2) Even when filters are properly designed, they are of little use in handling in-band noise and interferences, which are also barriers to bearing fault detection, particularly for incipient bearing faults with weak signatures. (3) Conventional approaches for bearing fault detection under constant speed are no longer applicable to the variable speed case, because speed fluctuations may cause "smearing" of the discrete frequencies in the frequency representation. Most current methods for rotating machinery condition monitoring under time-varying speed require signal resampling based on the shaft rotating frequency. For the bearing case, the shaft rotating frequency is, however, often unavailable, as it is coupled with the instantaneous fault characteristic frequency (IFCF) by a fault characteristic coefficient (FCC) which cannot be determined without knowing the fault type. Additionally, the effectiveness of resampling-based methods is largely dependent on the accuracy of the resampling procedure which, even if reliable, can complicate the entire fault detection process substantially. (4) Time-frequency analysis (TFA) has proved to be a powerful tool for analyzing nonstationary signals and, moreover, does not require resampling for bearing fault identification. However, the diffusion of the time-frequency representation (TFR) along the time and frequency axes, caused by a lack of energy concentration, handicaps the application of TFA. In fact, the reported TFA applications in bearing fault diagnosis are still very limited. To address the first two aforementioned problems, i.e., (1) and (2), for constant speed cases, two morphology-based methods are proposed to extract bearing fault features without prefiltering.
Two further methods are developed to specifically handle the remaining problems for bearing fault detection under time-varying speed conditions. These methods are itemized as follows: (1) an efficient enveloping method based on the signal fractal dimension (FD) for bearing fault feature extraction without prefiltering; (2) a signal decomposition technique based on oscillatory behaviors for noise reduction and interference removal (including in-band interferences); (3) a prefiltering-free and resampling-free approach for bearing fault diagnosis under variable speed conditions via the joint application of FD-based envelope demodulation and generalized demodulation (GD); and (4) a combined dual-demodulation transform (DDT) and synchrosqueezing approach for TFR energy concentration enhancement and bearing fault identification. With respect to constant speed cases, the FD-based enveloping method, in which a short-time fractal dimension (STFD) transform is proposed, can suppress interferences and highlight the fault-induced impulsive signature by transforming the vibration signal into an STFD representation. Its effectiveness, however, deteriorates with the increased complexity of the interference frequencies, particularly for multiple interferences at high frequencies. As such, the second method, which isolates fault-induced transients from interferences and noise via oscillatory behavior analysis, is developed to complement the FD-based enveloping approach. Both methods are independent of frequency information and free from prefiltering, hence eliminating the tedious process of filter parameter specification. In-band vibration interferences can also be suppressed, mainly by the second approach. For the nonstationary cases, a prefiltering-free and resampling-free strategy is developed via the joint application of STFD and GD, from which a resampling-free order spectrum can be derived. This order spectrum can effectively reveal not only the existence of a fault but also its location. However, the success of this method relies largely on an effective enveloping technique. To address this matter and, at the same time, to exploit the advantages of TFA in nonstationary signal analysis, a TFA technique involving dual demodulations and an iterative process is developed and innovatively applied to bearing fault identification. The proposed methods have been validated using both simulated and experimental data collected in our lab. The test results have shown that the first two methods can effectively extract fault signatures, remove interferences (including in-band ones) without prefiltering, and detect fault types from vibration signals for constant speed cases. The last two have been shown to be effective in detecting faults and discerning fault types from vibration data collected under variable speed conditions, without resampling or prefiltering.
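For context, the conventional constant-speed pipeline that methods (1) and (2) aim to improve on is envelope analysis: demodulate the vibration signal (for example with the Hilbert transform) and inspect the envelope spectrum for the fault characteristic frequency and its harmonics. The short Python sketch below simulates that baseline on an artificial impulsive signal; the sampling rate, resonance and fault frequency are invented, and this is not the thesis's STFD method:

```python
import numpy as np
from scipy.signal import hilbert

fs = 20_000                        # sampling rate, Hz (illustrative)
t = np.arange(0, 1.0, 1 / fs)
fault_freq = 87.0                  # assumed outer-race fault characteristic frequency, Hz

# Simulate fault-induced impulses: a 3 kHz resonance excited every 1/fault_freq seconds
signal = np.zeros_like(t)
for t0 in np.arange(0, 1.0, 1 / fault_freq):
    idx = t >= t0
    signal[idx] += np.exp(-800 * (t[idx] - t0)) * np.sin(2 * np.pi * 3000 * (t[idx] - t0))
signal += 0.5 * np.random.default_rng(3).normal(size=t.size)   # background noise

# Envelope via the Hilbert transform, then its spectrum
envelope = np.abs(hilbert(signal))
env_spec = np.abs(np.fft.rfft(envelope - envelope.mean()))
freqs = np.fft.rfftfreq(envelope.size, 1 / fs)

band = (freqs > 10) & (freqs < 500)
peak = freqs[band][np.argmax(env_spec[band])]
print(f"dominant envelope-spectrum peak ~ {peak:.1f} Hz (expected ~ {fault_freq} Hz)")
```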
46

"Seleção de atributos importantes para a extração de conhecimento de bases de dados" / "Selection of important features for knowledge extraction from data bases"

Huei Diana Lee 16 December 2005 (has links)
Progress in computer systems and devices applied to a number of different fields has made it possible to collect and store an increasing amount of data. Moreover, this technological advance enables the storage of a huge amount of data which is difficult to process unless new approaches are used. The main reason to maintain all these data is to use them for the benefit of humanity. Many areas are engaged in the research and proposal of methods and processes to deal with this growing volume of data. One such process is Knowledge Discovery from Databases, which aims at finding valuable and interesting knowledge that may be hidden inside the data. In order to extract knowledge from data, models (hypotheses) are usually developed, supported by many fields such as Machine Learning. Feature Selection plays an important role in this process, since it represents a central problem in machine learning and is frequently applied as a data pre-processing step. Its objective is to choose, according to some importance criterion, a subset of the original features that describe a data set, by removing irrelevant and/or redundant features, as these may decrease data quality and reduce the comprehensibility of hypotheses induced by supervised learning algorithms. Most state-of-the-art feature selection algorithms focus mainly on finding relevant features. However, it has been shown that relevance alone is not sufficient to select important features. Different approaches have been proposed to select features, among them the filter approach. The idea of this approach is to remove features before the model's induction takes place, based on general characteristics of the data set. In order to select some features and discard others, it is necessary to measure the features' goodness, and many importance measures have been proposed. Some of them are based on distance measures, consistency of data and information content, while others are founded on dependence measures. As there is no mathematical analysis capable of predicting whether a feature selection algorithm will produce better feature subsets than another, it is important to evaluate the performance of these algorithms empirically. Comparisons among algorithms are usually carried out by analysing the error of the model built from the feature subsets selected by each algorithm. Nevertheless, this parameter alone is not sufficient; other issues, such as the percentage reduction in the number of features, should also be taken into account. In this work we propose a filter that decouples the analysis of feature relevance from the analysis of feature redundancy, and introduces the use of the Fractal Dimension to deal with redundant features in supervised learning. We also propose a performance evaluation model based on the error of the constructed hypothesis and on the percentage reduction in the number of selected features. Experimental results obtained using well-known feature selection algorithms on several data sets show that our proposal is competitive with them. Another important issue related to knowledge extraction from data is the format in which the data are represented. Usually, it is necessary to describe examples in the so-called attribute-value format.
This work also proposes a methodology to support, through a semi-automatic process, the construction of a database in the attribute-value format from patient information contained in medical findings which are described in natural language. This process was successfully applied to a real case.
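One common way a fractal dimension is used to flag redundant attributes — in the spirit of the approach described above, though not necessarily the exact algorithm of this thesis — is to estimate the data set's correlation fractal dimension and drop an attribute whenever its removal barely changes that intrinsic dimension. A small Python sketch of this idea (the data, cell sizes and tolerance are illustrative assumptions) is:

```python
import numpy as np

def correlation_fd(data, sizes=(0.5, 0.25, 0.125, 0.0625)):
    """Correlation fractal dimension D2 via box occupancy counts.

    data: (n_samples, n_features) array, assumed scaled to [0, 1].
    D2 is the slope of log(sum_i p_i^2) versus log(cell size).
    """
    log_s, log_r = [], []
    for r in sizes:
        cells = np.floor(data / r).astype(int)
        _, counts = np.unique(cells, axis=0, return_counts=True)
        p = counts / counts.sum()
        log_s.append(np.log(np.sum(p ** 2)))
        log_r.append(np.log(r))
    slope, _ = np.polyfit(log_r, log_s, 1)
    return slope

def drop_redundant(data, tol=0.1):
    """Greedy backward elimination: drop an attribute if the intrinsic (fractal)
    dimension barely changes without it - a sign the attribute is redundant."""
    keep = list(range(data.shape[1]))
    full_fd = correlation_fd(data)
    changed = True
    while changed and len(keep) > 1:
        changed = False
        for j in list(keep):
            cols = [c for c in keep if c != j]
            if abs(correlation_fd(data[:, cols]) - full_fd) < tol:
                keep.remove(j)
                changed = True
                break
    return keep

# Illustrative data: 3 informative attributes plus 1 redundant (a copy of column 0)
rng = np.random.default_rng(4)
X = rng.random((2000, 3))
X = np.column_stack([X, X[:, 0]])
print("attributes kept:", drop_redundant(X))   # the duplicated attribute is discarded
```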
47

Dimensão fractal em achados histológicos de fígado de ratos wistar intoxicados experimentalmente com veneno de Crotalus Durissus Terrificus / Fractal dimension in liver histological findings of Wistar rats experimentally poisoned with Crotalus durissus terrificus venom

SANTOS, Isabella Keyko Navarro Saneshigue dos 24 November 2017 (has links)
Accidents caused by the venom of Crotalus snakes, popularly known in Brazil as rattlesnakes, cause the highest number of deaths in humans and animals, mainly due to the great neurotoxic, myotoxic, coagulant, nephrotoxic and hepatotoxic potential of their venom. The present study aimed to analyze, by histology and fractal dimension, liver samples of Wistar rats experimentally poisoned with venom of the snake Crotalus durissus terrificus. The hypothesis is that the venom of Crotalus durissus terrificus is capable of inducing hepatic damage at the dose recommended in this study, that its alterations can be quantified by the fractal dimension, and that antivenom serum is able to minimize the hepatic lesions induced by the venom. Ninety rats were divided into groups and treated as follows: control group (GC, n = 30), 0.9% sodium chloride solution; venom group (GV, n = 30), crotalic venom; venom/antivenom group (GVS, n = 30), crotalic venom followed by antivenom serum 6 hours after application of the venom. Liver samples were collected at 2 h (M1), 8 h (M2) and 24 h (M3) after venom administration and submitted to histological analysis and fractal dimension (FD) estimation using the ImageJ® software and the box-counting method. Procedures for collecting, processing and analyzing samples were standardized. No significant lesions were observed in the GC. In the GV, necrosis, cytoplasmic and nuclear vacuolization and absence of inflammatory infiltrate were observed at M2 and M3, whereas in the GVS a mononuclear inflammatory infiltrate was evident at all times, in addition to the lesions found in the GV. The lesions of necrosis and cytoplasmic and nuclear vacuolization, considered of greater severity, were seen at M3 in both the GV and the GVS. FD increased for the same alterations in the GV and GVS over time, with no difference between them but with a significant difference compared to the GC. The lesions evidenced in the liver were not minimized by application of the antivenom serum. This study agrees with other authors regarding the hepatotoxicity of crotalic venom with respect to the histological findings, and the results indicate an increase in FD for the findings of vacuolization and necrosis, proving it to be an efficient method for the quantitative evaluation of venom-induced morphological changes without observer interference. In addition, the failure of the antivenom serum to protect the liver was evident. It is concluded that Crotalus durissus terrificus venom has hepatotoxic effects; FD is effective in the quantitative morphological evaluation of the liver for vacuolization and necrosis; and the antivenom serum did not protect the liver from venom-induced lesions.
48

Analýza variability srdečního rytmu pomocí fraktální dimenze / Fractal dimension for heart rate variability analysis

Číhal, Martin January 2013 (has links)
This work is focused on the use of the fractal dimension for heart rate variability (HRV) analysis. Both the theory of heart rate variability and the methods of HRV analysis in the time domain and using the fractal dimension are summarized. A short comparison of the time-domain and fractal-dimension methods is presented.
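The abstract does not name the fractal dimension estimator used; one estimator widely used for RR-interval series in HRV work is Higuchi's method, sketched below in Python on a synthetic RR series (the series and k_max are illustrative assumptions, not data or settings from this thesis):

```python
import numpy as np

def higuchi_fd(x, k_max=8):
    """Higuchi fractal dimension of a 1-D series (e.g. successive RR intervals)."""
    x = np.asarray(x, dtype=float)
    n = x.size
    log_len, log_inv_k = [], []
    for k in range(1, k_max + 1):
        lengths_m = []
        for m in range(k):
            idx = np.arange(m, n, k)
            if idx.size < 2:
                continue
            # Curve length of the subsampled series, normalised as in Higuchi (1988)
            dist = np.abs(np.diff(x[idx])).sum()
            lengths_m.append(dist * (n - 1) / ((idx.size - 1) * k) / k)
        log_len.append(np.log(np.mean(lengths_m)))
        log_inv_k.append(np.log(1.0 / k))
    slope, _ = np.polyfit(log_inv_k, log_len, 1)
    return slope

# Illustrative RR-interval series: 300 beats near 0.8 s with slow drift plus beat-to-beat noise
rng = np.random.default_rng(5)
rr = 0.8 + np.cumsum(rng.normal(0, 0.005, 300)) + rng.normal(0, 0.01, 300)
print("Higuchi FD ~", round(higuchi_fd(rr), 2))
```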
49

Characterizing the Respiration of Stems and Roots of Three Hardwood Tree Species in the Great Smoky Mountains

Rakonczay, Zoltán 14 July 1997 (has links)
Carbon dioxide efflux rates (CER) of stems and roots of overstory and understory black cherry (Prunus serotina Ehrh., BC), red maple (Acer rubrum L., RM) and northern red oak (Quercus rubra L., RO) trees were monitored over two growing seasons at two contrasting sites in the Great Smoky Mountains to investigate diurnal and seasonal patterns in respiration and to develop prediction models based on environmental and plant parameters. CER of small roots (d = 0–8 mm) was measured with a newly developed system which allows periodic in situ measurements using permanently installed flexible cuvettes. Temperature-adjusted CER of roots showed no diel variation. The moderate long-term changes occurred simultaneously in all species and size classes, suggesting that they were driven mostly by environmental factors. Mean root CER ranged from 0.5 to 4.0 nmol g⁻¹ d.w. s⁻¹. Rates were up to six times higher for fine roots (d < 2.0 mm) than for coarse roots. CER (per unit length) of boles (d > 10 cm) and twigs (d < 2 cm) was related to diameter by the function ln CER = a + D·ln d, with D between 1.2 and 1.8. A new, scale-invariant measure of CER, based on D, facilitated comparisons across diameters. Q₁₀ varied with the method of determination, and it was higher in spring (1.8-2.5) than in autumn (1.4-1.5) for all species. Daytime bole CER often fell below temperature-based predictions, likely due to transpiration. The reduction (usually <10%) was less pronounced at the drier site. Twig CER showed more substantial (often >±50%) deviations from the predictions. Deviations were higher in the canopy than in the understory. Mean bole maintenance respiration (at 20°C and d = 20 cm) was 0.66, 0.43 and 0.50 μmol m⁻¹, while the volume-based growth coefficient was around 5, 6 and 8 mol cm⁻³ for BC, RM and RO, respectively. In a controlled study, BC and RM seedlings were fumigated in open-top chambers with sub-ambient, ambient and twice-ambient levels of ozone. The twice-ambient treatment reduced stem CER in BC by 50% (p = 0.05) in July, but there was no treatment effect in September or in RM. Ozone reduced root/shoot ratio and diameter growth in BC, and Pmax in both species. / Ph. D.
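The two relationships this abstract relies on, the diameter scaling ln CER = a + D·ln d and the Q₁₀ temperature response, are straightforward to make concrete. The Python sketch below uses placeholder coefficients (a, D, Q₁₀ and the diameters are invented, not the values fitted in this study):

```python
import numpy as np

# Placeholder coefficients - not the values fitted in the study
a, D = -2.0, 1.5         # ln CER = a + D * ln(d), CER per unit stem length
Q10, T_ref = 2.0, 20.0   # temperature sensitivity and reference temperature (deg C)

def cer_per_length(diameter_cm):
    """Predicted CO2 efflux per unit length at the reference temperature."""
    return np.exp(a + D * np.log(diameter_cm))

def adjust_to_temperature(cer_ref, temp_c):
    """Scale a rate measured or predicted at T_ref to another temperature using Q10."""
    return cer_ref * Q10 ** ((temp_c - T_ref) / 10.0)

for d in (2.0, 10.0, 20.0):   # twig to bole diameters, cm
    base = cer_per_length(d)
    print(f"d = {d:5.1f} cm: CER(20 C) = {base:7.3f}, CER(25 C) = {adjust_to_temperature(base, 25.0):7.3f}")
```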
50

Geometric Methods for Simplification and Comparison of Data Sets

Singhal, Kritika 01 October 2020 (has links)
No description available.
