41

A Novel Multi-Symbol Curve Fit based CABAC Framework for Hybrid Video Codec's with Improved Coding Efficiency and Throughput

Rapaka, Krishnakanth 21 September 2012 (has links)
Video compression is an essential component of present-day applications and can be a decisive factor in the success or failure of a business model. There is an ever-increasing demand to transmit a larger number of superior-quality video channels within the available transmission bandwidth. Consumers are increasingly discerning about the quality and performance of video-based products, so there is a strong incentive for continuous improvement in video coding technology if a company is to keep a market edge over its competitors. Even though processor speeds and network bandwidths continue to increase, better video compression still yields a more competitive product. This drive to improve video compression technology has led to a revolution in the last decade. This thesis addresses some of these data compression problems in practical multimedia systems that employ hybrid video coding schemes. Real-life video signals typically show non-stationary statistical behavior, with statistics that depend largely on the video content and the acquisition process. Hybrid video coding schemes such as H.264/AVC exploit some of these non-stationary characteristics, but certainly not all of them. Moreover, higher-order statistical dependencies at the syntax-element level are mostly neglected in existing video coding schemes. Designing a video coding scheme that takes these typically observed statistical properties into account, however, offers room for significant improvements in coding efficiency. In this thesis a new frequency-domain curve-fitting compression framework is proposed as an extension to the H.264 Context Adaptive Binary Arithmetic Coder (CABAC) that achieves better compression efficiency at reduced complexity. The proposed curve-fitting extension to H.264 CABAC, henceforth called CF-CABAC, is modularly designed to fit conveniently into existing block-based H.264 hybrid video entropy coding algorithms. There have been many proposals in the literature to fuse surface/curve fitting with block-based, region-based, or training-based (VQ, fractals) compression algorithms, primarily to exploit pixel-domain redundancies. Although the compression efficiency of these techniques is expectedly better than that of DCT-based compression, their main drawback is a high computational demand that makes them non-competitive for real-time applications. The curve-fitting techniques proposed so far have operated in the pixel domain, where video characteristics are highly non-stationary, making curve fitting inefficient in terms of video quality, compression ratio, and complexity. In this thesis we instead apply curve fitting to quantized frequency-domain coefficients and fuse this technique with H.264 CABAC entropy coding. Based on some predictable characteristics of quantized DCT coefficients, a computationally inexpensive curve-fitting technique is explored that fits into the existing H.264 CABAC framework. Furthermore, given the lossy nature of video compression and the strong demand for bandwidth and computational resources in a multimedia system, one of the key design issues for video coding is to optimize the trade-off among quality (distortion), compression (rate), and complexity. This thesis therefore also briefly studies the rate-distortion (RD) optimization approaches proposed for video coding to explore the best RD performance of a codec. Further, we propose a graph-based algorithm for rate-distortion optimization of quantized coefficient indices for the proposed CF-CABAC entropy coding.
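The core idea lends itself to a compact illustration. The sketch below is not the thesis's CF-CABAC implementation; it merely fits an assumed exponential-decay model to the zigzag-ordered magnitudes of quantized DCT coefficients of a synthetic 8x8 block, the kind of predictable frequency-domain structure such a framework exploits (the quantization step and curve family are assumptions):

```python
# Illustrative sketch only: fit a decay model to quantized DCT coefficient
# magnitudes. This is NOT the thesis's CF-CABAC algorithm; the model and
# quantization step are assumptions for demonstration.
import numpy as np
from scipy.fftpack import dct
from scipy.optimize import curve_fit

def zigzag_indices(n=8):
    """Return (row, col) pairs of an n x n block in zigzag scan order."""
    return sorted(((r, c) for r in range(n) for c in range(n)),
                  key=lambda rc: (rc[0] + rc[1],
                                  rc[0] if (rc[0] + rc[1]) % 2 else -rc[0]))

# Synthetic 8x8 "image block" with smooth content
rng = np.random.default_rng(0)
x, y = np.meshgrid(np.arange(8), np.arange(8))
block = 128 + 40 * np.cos(0.3 * x) * np.cos(0.2 * y) + rng.normal(0, 2, (8, 8))

coeffs = dct(dct(block.T, norm='ortho').T, norm='ortho')  # 2-D DCT-II
q = np.round(coeffs / 10.0)                               # uniform quantization (step assumed)

mags = np.array([abs(q[r, c]) for r, c in zigzag_indices()])
k = np.arange(len(mags))

def decay(k, a, b):
    return a * np.exp(-b * k)  # assumed curve family for coefficient decay

(a, b), _ = curve_fit(decay, k, mags, p0=(mags[0] + 1.0, 0.5))
residual = mags - decay(k, a, b)
print(f"fit: a={a:.2f}, b={b:.3f}, max |residual|={np.abs(residual).max():.2f}")
```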
42

The Determination of Mechanical Properties of Biomedical Materials

Chien, Hui-Lung 29 August 2012 (has links)
The mechanical properties of biomedical materials were determined and discussed in this study. Extension and tensile tests on aorta and coronary artery specimens were carried out using a tensile testing machine. Based on the incompressibility of biological soft tissue, the stress-stretch curves of the arteries were obtained. This study proposes a nonlinear Ogden material model for the numerical simulation of coronary artery extension during stent implantation; the corresponding Ogden model parameters were derived from the stress-stretch curves obtained in the tensile tests. For validation, the proposed nonlinear Ogden material model for the coronary artery was applied to a Palmaz-type stent implantation process. The simulated stent deformation was found to be reasonable and correlated well with the measured results. Microindentation experiments were used to measure the mechanical properties of the enamel and dentine of human teeth. To reveal the relation between experimental parameters and the measured mechanical properties, Young's moduli were investigated while varying the experimental parameters. The maximum indentation load significantly influences the measured values, although Young's modulus varies only slightly when maximum indentation loads of 10 to 100 mN are applied; Young's modulus is not sensitive to the portion of unloading data used or to tooth age. A combination of finite element analysis and curve fitting is proposed to determine the mechanical properties of a thin film deposited on a substrate. The film properties, i.e. Young's modulus, yield strength and strain-hardening exponent, were extracted by applying an iterative curve-fitting scheme to the experimental and simulated force-indentation depth curves during the microindentation loading and unloading processes. The variation of the mechanical properties of TiN thin films with thicknesses ranging from 0.2 to 1.4 μm was extracted. The results show that the film thickness effect causes the Young's modulus of TiN thin films to decrease with decreasing film thickness, particularly at thicknesses below 0.8 μm. It can therefore be inferred that a film thickness of 0.8 μm possibly represents the upper bound for employing macroscopic mechanics with bulk material properties.
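As a rough sketch of the artery-model fitting described above, the following fits a one-term incompressible Ogden model to synthetic uniaxial stress-stretch data (the thesis does not state the number of Ogden terms or the fitting routine; the parameter values here are assumptions):

```python
# Sketch: fit a one-term incompressible Ogden model to uniaxial
# stress-stretch data. Synthetic data stand in for the artery tests;
# mu (shear-like modulus) and alpha are the parameters to recover.
import numpy as np
from scipy.optimize import curve_fit

def ogden_uniaxial(lam, mu, alpha):
    """Cauchy stress for incompressible uniaxial tension, one Ogden term:
    sigma = mu * (lam**alpha - lam**(-alpha/2))."""
    return mu * (lam**alpha - lam**(-alpha / 2.0))

# Synthetic "measured" stress-stretch curve with noise
rng = np.random.default_rng(1)
lam = np.linspace(1.0, 1.6, 25)
sigma_true = ogden_uniaxial(lam, mu=0.02, alpha=12.0)   # MPa scale, assumed
sigma_meas = sigma_true + rng.normal(0, 0.002, lam.size)

(mu_fit, alpha_fit), _ = curve_fit(ogden_uniaxial, lam, sigma_meas, p0=(0.01, 8.0))
print(f"mu = {mu_fit:.4f} MPa, alpha = {alpha_fit:.2f}")
```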
43

Investigating the empirical relationship between oceanic properties observable by satellite and the oceanic pCO₂ / Marizelle van der Walt

Van der Walt, Marizelle January 2011 (has links)
In this dissertation, the aim is to investigate the empirical relationship between the partial pressure of CO₂ (pCO₂) and other ocean variables in the Southern Ocean, using only a small percentage of the available data. CO₂ is one of the main greenhouse gases contributing to global warming and climate change. The concentration of anthropogenic CO₂ in the atmosphere, however, would have been much higher if some of it had not been absorbed by oceanic and terrestrial sinks. The oceans absorb and release CO₂ from and to the atmosphere, and large regions of the Southern Ocean are expected to be a CO₂ sink. However, measurements of CO₂ concentrations in the Southern Ocean are sparse, so accurate values for the sinks and sources cannot be determined; the sparse observations also make it difficult to develop accurate oceanic and ocean-atmosphere models for this part of the ocean. In this dissertation classical techniques are investigated for determining the empirical relationship between pCO₂ and other oceanic variables from in situ measurements. Additionally, sampling techniques are investigated for making a judicious selection of a small percentage of the total available data points with which to develop an accurate empirical relationship. Data from the SANAE49 cruise, stretching between Antarctica and Cape Town, are used. The complete data set contains 6103 data points; the maximum pCO₂ value along this stretch is 436.0 μatm, the minimum is 251.2 μatm and the mean is 360.2 μatm. An empirical relationship is investigated between pCO₂ and the variables temperature (T), chlorophyll-a concentration (Chl), mixed layer depth (MLD) and latitude (Lat), with the methods repeated with latitude alternately included and excluded as a variable. D-optimal sampling is used to select a small percentage of the available data for determining the empirical relationship. Least-squares optimization is one method used to determine the empirical relationship. For 200 D-optimally sampled points, the pCO₂ prediction with the fourth-order equation yields a root mean square (RMS) error of 15.39 μatm (on the estimation of pCO₂) with latitude excluded as a variable and 8.797 μatm with latitude included. Radial basis function (RBF) interpolation is another method used to determine the empirical relationship between the variables. RBF interpolation with 200 D-optimally sampled points yields an RMS error of 9.617 μatm with latitude excluded and 6.716 μatm with latitude included. Applying optimal scaling to the variables in the RBF interpolation yields, for 200 D-optimally sampled points, an RMS error of 9.012 μatm with latitude excluded and 4.065 μatm with latitude included. / Thesis (MSc (Applied Mathematics))--North-West University, Potchefstroom Campus, 2012
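A minimal sketch of the RBF step follows, using SciPy's RBFInterpolator on synthetic stand-ins for the (T, Chl, MLD, Lat) to pCO₂ relationship; the variable ranges, kernel and synthetic relationship are assumptions, and random sampling replaces D-optimal sampling for brevity:

```python
# Sketch: empirical pCO2 model via radial basis function interpolation.
# Synthetic data; in the dissertation the ~200 training points were chosen
# by D-optimal sampling rather than at random.
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(2)
n = 6103
T   = rng.uniform(-2, 20, n)      # temperature, deg C (range assumed)
Chl = rng.uniform(0, 3, n)        # chlorophyll-a, mg/m^3 (assumed)
MLD = rng.uniform(10, 300, n)     # mixed layer depth, m (assumed)
Lat = rng.uniform(-70, -34, n)    # latitude, deg (assumed)

# A made-up "true" relationship plus noise, standing in for measurements
pco2 = 360 + 3.5 * T - 12 * Chl + 0.05 * MLD + 0.8 * (Lat + 50) + rng.normal(0, 3, n)

X = np.column_stack([T, Chl, MLD, Lat])
train = rng.choice(n, size=200, replace=False)

model = RBFInterpolator(X[train], pco2[train], kernel='thin_plate_spline')
rms = np.sqrt(np.mean((model(X) - pco2) ** 2))
print(f"RMS error over all {n} points: {rms:.3f} uatm")
```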
46

Powered addition as modelling technique for flow processes

De Wet, Pierre March 2010 (has links)
Thesis (MSc (Applied Mathematics))--University of Stellenbosch, 2010. / ENGLISH ABSTRACT: The interpretation of collected data, and the compilation of predictive equations to represent its general trend, is aided immensely by graphical representation. While predictive equations are, by and large, more accurate and convenient to use in applications than graphs, the latter are often preferable since they visually illustrate deviations in the data, thereby giving an indication of the reliability and range of validity of the equation. A combination of these two tools, a graph for demonstration and an equation for use, is desirable to ensure optimal understanding. Often, however, the functional dependencies of the dependent variable are only known for large and small values of the independent variable, with solutions for intermediate quantities obscure for various reasons (e.g. the narrow band within which the transition from one regime to the other occurs, or inadequate knowledge of the physics in this region). The limiting solutions may be regarded as asymptotes, and the powered addition, to a power s, of such asymptotes, f0 and f∞, leads to a single correlating equation that is applicable over the entire domain of the independent variable. This procedure circumvents the introduction of ad hoc curve-fitting measures for the different regions and the subsequent, unwanted jumps in piecewise-fitted correlative equations for the dependent variable(s). Approaches to implementing the technique successfully for different combinations of asymptotic conditions are discussed. The method of powered addition is then applied to experimental data, and the agreements and discrepancies with literature and analytical models are discussed, the underlying motivation being to establish a sound modelling framework for analytical and computational predictive measures. The proposed procedure proves highly useful for summarising and interpreting experimental data in an elegant and simple manner.
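The powered-addition construction itself is compact enough to sketch. Below, the small-x asymptote f0(x) = x and the large-x asymptote f∞(x) = 1 are combined as f = (f0^s + f∞^s)^(1/s) and the shifting exponent s is fitted to synthetic data (the asymptote pair and the data are illustrative assumptions):

```python
# Sketch of powered addition: combine the small-x asymptote f0(x) = x and
# the large-x asymptote f_inf(x) = 1 into one correlating equation
# f(x) = (f0**s + f_inf**s)**(1/s). The exponent s controls how sharply
# the curve turns between regimes; here s is fitted to synthetic "data".
import numpy as np
from scipy.optimize import curve_fit

def powered_addition(x, s):
    f0, f_inf = x, np.ones_like(x)          # the two asymptotes
    return (f0**s + f_inf**s) ** (1.0 / s)  # s < 0: approached from below

# Synthetic measurements following x/(1+x), which equals s = -1 exactly
rng = np.random.default_rng(3)
x = np.logspace(-2, 2, 40)
y = x / (1 + x) + rng.normal(0, 0.005, x.size)

(s_fit,), _ = curve_fit(powered_addition, x, y, p0=(-2.0,))
print(f"fitted shifting exponent s = {s_fit:.3f}")  # expect roughly -1
```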
47

A Point Cloud Approach to Object Slicing for 3D Printing

Oropallo, William Edward, Jr. 20 March 2018 (has links)
Various industries have embraced 3D printing for manufacturing on-demand, custom printed parts. However, 3D printing requires intelligent data processing and algorithms to go from CAD model to machine instructions, and one of the most crucial steps in the process is the slicing of the object. Most 3D printers build parts by accumulating material layer by layer, so 3D printing software needs to calculate these layers by slicing a model and computing the intersections. Finding exact intersections on the original model is mathematically complicated and computationally demanding, so a preprocessing stage of tessellation has become standard practice for slicing models. Calculating intersections with a tessellation of the original model is computationally simple but can introduce inaccuracies and errors that can ruin the final print. This dissertation shows that a point cloud approach to preprocessing and slicing models is robust and accurate. The point cloud approach to object slicing avoids the complexities of directly slicing models while evading the error-prone tessellation stage. An algorithm developed for this dissertation generates point clouds and slices models within a tolerance. The algorithm takes the original NURBS model and converts it into a point cloud, based on layer thickness and accuracy requirements, then uses a gridding structure to calculate where intersections happen and fits B-spline curves to those intersections. The algorithm finds accurate intersections and can ignore certain anomalies and errors from the modeling process, and the underlying point evaluation is stable and computationally inexpensive. This provides an alternative to the challenges of both the direct and tessellated slicing methods that have been the focus of the 3D printing industry.
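A toy version of the slicing idea, assuming a noisy cylindrical point cloud in place of a NURBS model and omitting the dissertation's gridding and tolerance machinery, might look like this:

```python
# Sketch of point-cloud slicing: sample a surface as a point cloud, bin
# points into layers by z, and fit a closed B-spline to one layer's
# cross-section. The cylinder stands in for a real model; layer thickness
# and smoothing are assumptions.
import numpy as np
from scipy.interpolate import splprep, splev

rng = np.random.default_rng(4)
n = 20000
theta = rng.uniform(0, 2 * np.pi, n)
z = rng.uniform(0.0, 10.0, n)
r = 5.0 + 0.02 * rng.normal(size=n)             # noisy cylinder, radius 5
cloud = np.column_stack([r * np.cos(theta), r * np.sin(theta), z])

layer_h = 0.2                                    # layer thickness (assumed)
z0 = 3.0                                         # slice height to extract
slab = cloud[np.abs(cloud[:, 2] - z0) < layer_h / 2]

# Order the slab points by polar angle so the spline traverses the contour
ang = np.arctan2(slab[:, 1], slab[:, 0])
slab = slab[np.argsort(ang)]

tck, _ = splprep([slab[:, 0], slab[:, 1]], s=len(slab) * 1e-3, per=True)
u = np.linspace(0, 1, 200)
xs, ys = splev(u, tck)
print(f"layer at z={z0}: {len(slab)} points, "
      f"mean contour radius {np.hypot(xs, ys).mean():.3f}")
```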
48

Determinação da curva aproximadora pela composição de curvas de Bézier e aplicação do recozimento simulado. / Curve fitting by composition of Bezier curves and simulated annealing

Edson Kenji Ueda 12 February 2015 (has links)
The task of determining a curve from a sequence of points is very important in CAD. This work proposes an algorithm to determine an approximating curve, represented as a sequence of Bézier curves, from a sequence of points. A piecewise Bézier approach is used in which each segment has C1-weak continuity. The optimization is done by simulated annealing with adaptive neighborhood, minimizing the sum of the distances from each point of the sequence to the generated curve and using the curve's length as a regularization factor. In addition, a multi-objective simulated annealing is used that evaluates the influence of the sum of point-to-curve distances and of the curve length separately. A comparison between the curve-fitting and curve-interpolation techniques is also presented.
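A stripped-down sketch of the objective follows: a single cubic Bézier curve fitted to a point sequence by simulated annealing, with curve length as the regularization factor (the thesis uses piecewise Bézier curves with C1-weak continuity and an adaptive neighborhood; the cooling schedule and weights here are assumptions):

```python
# Sketch: fit one cubic Bezier curve to a point sequence with simulated
# annealing, penalizing curve length as a regularizer.
import numpy as np

rng = np.random.default_rng(5)

def bezier(ctrl, t):
    """Evaluate a cubic Bezier curve (ctrl is 4x2) at parameter values t."""
    return (np.outer((1 - t)**3, ctrl[0]) + np.outer(3 * (1 - t)**2 * t, ctrl[1])
            + np.outer(3 * (1 - t) * t**2, ctrl[2]) + np.outer(t**3, ctrl[3]))

def cost(ctrl, pts, t_dense, lam=0.05):
    curve = bezier(ctrl, t_dense)
    # sum of distances from each data point to its nearest curve sample,
    # plus curve length as the regularization factor
    d = np.min(np.linalg.norm(pts[:, None, :] - curve[None, :, :], axis=2), axis=1)
    length = np.sum(np.linalg.norm(np.diff(curve, axis=0), axis=1))
    return d.sum() + lam * length

s = np.linspace(0, np.pi / 2, 30)                 # noisy quarter-arc to fit
pts = np.column_stack([np.cos(s), np.sin(s)]) + rng.normal(0, 0.01, (30, 2))
t_dense = np.linspace(0, 1, 100)

cur = pts[[0, 9, 19, 29]].astype(float)           # initial control polygon
cur_cost, T = cost(cur, pts, t_dense), 1.0
for _ in range(5000):
    cand = cur + rng.normal(0, 0.02, cur.shape)   # fixed-scale neighborhood
    c = cost(cand, pts, t_dense)
    if c < cur_cost or rng.random() < np.exp((cur_cost - c) / T):
        cur, cur_cost = cand, c                   # Metropolis acceptance
    T *= 0.999                                    # geometric cooling
print(f"final cost: {cur_cost:.4f}")
```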
49

Estimação não paramétrica da trajetória percorrida por um veículo autônomo / Non-parametric curve estimation of an autonomous vehicle trajectory

Zambom, Adriano Zanin, 1982- 03 June 2008 (has links)
Advisor: Nancy Lopes Garcia / Dissertation (master's) - Universidade Estadual de Campinas, Instituto de Matemática, Estatística e Computação Científica / Abstract: The objective of this study is to find the best trajectory for an autonomous vehicle that must travel from a point A to a point B over the shortest possible distance while avoiding the fixed obstacles that may lie between them. We further assume that a safe distance r must be maintained between the vehicle and the obstacles. The vehicle cannot maneuver abruptly, so the trajectory has to follow a smooth curve. Obviously, if there are no obstacles, the best route is a straight line between A and B. In this work we propose a penalized nonparametric method of finding the best path. If measurement error is present, a consistent stochastic estimator is proposed, in the sense that as the number of observations increases the stochastic trajectory converges to the deterministic one. Two applications are immediate: searching for the optimal path of an autonomous vehicle while avoiding all fixed obstacles between two points, and flight planning to avoid threat or turbulence zones. / Master's in Statistics
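The penalized formulation can be sketched directly, assuming a discretized path, one circular obstacle with safety radius r, and made-up penalty weights (a toy stand-in for the thesis's nonparametric estimator):

```python
# Sketch of the penalized-path idea: discretize the A-to-B path into
# waypoints and minimize (path length) + (penalty for entering a safety
# disk of radius r around an obstacle) + (a smoothness term).
import numpy as np
from scipy.optimize import minimize

A, B = np.array([0.0, 0.0]), np.array([10.0, 0.0])
obstacle, r = np.array([5.0, 0.0]), 1.5          # safety radius r (assumed)

def objective(flat, n=20, w=50.0):
    pts = np.vstack([A, flat.reshape(n, 2), B])
    length = np.sum(np.linalg.norm(np.diff(pts, axis=0), axis=1))
    d = np.linalg.norm(pts - obstacle, axis=1)
    penalty = np.sum(np.maximum(0.0, r - d) ** 2)  # active only inside disk
    # second-difference term to keep the trajectory smooth
    bend = np.sum(np.linalg.norm(np.diff(pts, 2, axis=0), axis=1) ** 2)
    return length + w * penalty + 0.5 * bend

n = 20
init = np.linspace(A, B, n + 2)[1:-1] + [[0.0, 0.1]] * n  # nudge off the line
res = minimize(objective, np.asarray(init).ravel(), method='L-BFGS-B')
path = np.vstack([A, res.x.reshape(n, 2), B])
print(f"min distance to obstacle = "
      f"{np.linalg.norm(path - obstacle, axis=1).min():.2f} (r = {r})")
```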
50

Modelo de von Bertalanffy generalizado aplicado a curvas de crescimento animal / Generalized von Bertalanffy model applied to animal growth curves

Scapim, Juliana 25 March 2008 (has links)
Advisor: Rodney Carlos Bassanezi / Dissertation (master's) - Universidade Estadual de Campinas, Instituto de Matemática, Estatística e Computação Científica / Abstract: The objective of this work is to study the weight growth curves of various animal specimens in order to establish metabolism patterns by animal class. The parameters of the nonlinear generalized von Bertalanffy model applied to the growth curves were adjusted to identify which parameters best describe the growth of each animal over the age-weight ranges provided. The study uses age and weight data for 19 specimens, including mammals, birds, amphibians, fish, crustaceans, worms and insects. The fits were carried out through computational experiments using MATLAB as a supporting tool. Since the principal characteristic of deterministic systems is the precision of the solution obtained, and since imprecise information naturally enters the modelling of animal growth and compromises this precision, the conclusions are framed with the support of Fuzzy Set Theory, which allows this kind of uncertainty to be handled in a reasonable way. The initial conjecture that there would be one metabolism pattern per animal class was not confirmed; distinctions even arose between males and females of the same species, as in the case of the turkey. Hypothetically, the loss of energy is related to the habits of each animal. / Master's in Applied Mathematics (Biomathematics)
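A sketch of such a fit, assuming the generalized von Bertalanffy form dW/dt = a*W^g - b*W with g < 1 and synthetic age-weight data in place of the 19 specimens:

```python
# Sketch: fit the generalized von Bertalanffy growth model
# dW/dt = a*W**g - b*W, whose closed form (for g != 1) is
#   W(t) = [a/b + (W0**(1-g) - a/b) * exp(-b*(1-g)*t)] ** (1/(1-g)),
# to synthetic age-weight data. All parameter values are assumptions.
import numpy as np
from scipy.optimize import curve_fit

def von_bertalanffy(t, a, b, g, W0):
    e = 1.0 - g
    return (a / b + (W0**e - a / b) * np.exp(-b * e * t)) ** (1.0 / e)

rng = np.random.default_rng(6)
t = np.linspace(0, 60, 30)                        # age (weeks, say)
W_true = von_bertalanffy(t, a=1.2, b=0.3, g=2/3, W0=0.5)
W_meas = W_true * (1 + rng.normal(0, 0.02, t.size))  # 2% relative noise

p0 = (1.0, 0.2, 0.7, 0.5)
bounds = ([0.01, 0.01, 0.1, 0.01], [10, 5, 0.99, 5])  # keep g < 1
(a, b, g, W0), _ = curve_fit(von_bertalanffy, t, W_meas, p0=p0, bounds=bounds)
W_inf = (a / b) ** (1 / (1 - g))                  # asymptotic adult weight
print(f"a={a:.3f}, b={b:.3f}, g={g:.3f}, W_inf={W_inf:.2f}")
```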
