11 |
Quantifying biodiversity trends in time and space. Studeny, Angelika C. January 2012 (has links)
The global loss of biodiversity calls for robust large-scale diversity assessment. Biological diversity is a multi-faceted concept; defined as the "variety of life", answering questions such as "How much is there?" or, more precisely, "Have we succeeded in reducing the rate of its decline?" is not straightforward. While various aspects of biodiversity give rise to numerous ways of quantification, we focus on temporal (and spatial) trends and their changes in species diversity. Traditional diversity indices summarise information contained in the species abundance distribution, i.e. each species' proportional contribution to total abundance. Estimated from data, these indices can be biased if variation in detection probability is ignored. We discuss differences between diversity indices and demonstrate possible adjustments for detectability. Additionally, most indices focus on the most abundant species in ecological communities. We introduce a new set of diversity measures based on a family of goodness-of-fit statistics. Governed by a free parameter, this family allows us to vary the sensitivity of the measures to dominance and rarity of species. Their performance is studied by assessing temporal trends in diversity for five communities of British breeding birds based on 14 years of survey data, where they are applied alongside the current headline index, a geometric mean of relative abundances. By revealing the contributions of both rare and common species to biodiversity trends, these "goodness-of-fit" measures provide novel insights into how ecological communities change over time. Biodiversity is not only subject to temporal changes; it also varies across space. We take first steps towards estimating spatial diversity trends. Finally, processes maintaining biodiversity act locally, at specific spatial scales. In contrast to abundance-based summary statistics, spatial characteristics of ecological communities may distinguish these processes.
We suggest a generalisation of a spatial summary, the cross-pair overlap distribution, to render it more flexible with respect to spatial scale.
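The headline index mentioned above, a geometric mean of relative abundances, and a traditional abundance-based index can be sketched as follows. The species counts are invented for illustration and are not the British breeding-bird survey data:

```python
import math

def shannon_index(abundances):
    """Traditional abundance-based diversity: -sum(p_i * ln p_i)
    over each species' proportional contribution to total abundance."""
    total = sum(abundances)
    props = [a / total for a in abundances if a > 0]
    return -sum(p * math.log(p) for p in props)

def geometric_mean_relative_abundance(current, baseline):
    """Headline-style index: geometric mean of each species' abundance
    relative to a baseline year (values below 1 indicate decline)."""
    ratios = [c / b for c, b in zip(current, baseline) if b > 0 and c > 0]
    return math.exp(sum(math.log(r) for r in ratios) / len(ratios))

counts_baseline = [120, 80, 40, 10, 5]  # hypothetical counts, baseline year
counts_later = [100, 90, 30, 12, 3]     # hypothetical counts 14 years later

print(shannon_index(counts_later))
print(geometric_mean_relative_abundance(counts_later, counts_baseline))
```

Note how the geometric mean weighs the halving of the rarest species (5 to 3) as heavily as changes in the commonest; the thesis's goodness-of-fit family instead exposes a tunable dominance/rarity sensitivity.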
|
12 |
Surface Free Energy Evaluation, Plasma Surface Modification And Biocompatibility Studies Of Pmma. Ozcan, Canturk 01 August 2006 (has links) (PDF)
PMMA is a widely used biomaterial, especially in orthopedics, orthodontics and ophthalmology. When biocompatibility is considered, modification of the biomaterial's surface may be needed to optimize its interactions with the biological environment. One of the most important changes that occurs after surface modification is the change in surface free energy (SFE). SFE is an important but elusive property of a material, and evaluation methods based on different assumptions exist in the literature. In this study, the SFE of pristine and oxygen-plasma-modified PMMA films was calculated by means of several theoretical approaches (Zisman, Saito, Fowkes, Berthelot, Geometric and Harmonic Mean, and Acid-Base) using a variety of probe liquids, and the results were compared to elucidate the differences between the methods. Dispersive, polar, acidic and basic components of the SFE were calculated from different liquid pairs and triplets using the Geometric and Harmonic Mean methods and the Acid-Base approach. The effect of the SFE and its components on cell-attachment efficiency was examined using fibroblast cells. It was observed that oxygen-plasma treatment altered the cell-attachment capability and hydrophilicity of the PMMA surfaces depending on the applied power and duration of the plasma.
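As an illustration of the Geometric Mean (Owens-Wendt) approach named above, the sketch below solves the standard two-liquid system for the dispersive and polar SFE components. The liquid components are commonly quoted literature values, but the contact angles are hypothetical, not the study's measurements:

```python
import math

# Owens-Wendt ("geometric mean") method: for each probe liquid,
#   gamma_L * (1 + cos(theta)) = 2 * (sqrt(gd_S * gd_L) + sqrt(gp_S * gp_L))
# Two liquids give a 2x2 linear system in sqrt(gd_S) and sqrt(gp_S).

# Commonly quoted surface-tension components (mJ/m^2); the contact angles
# are illustrative values for a PMMA-like surface, not measured data.
liquids = {
    "water":         {"total": 72.8, "d": 21.8, "p": 51.0, "theta_deg": 73.0},
    "diiodomethane": {"total": 50.8, "d": 50.8, "p": 0.0,  "theta_deg": 39.0},
}

def owens_wendt(liq1, liq2):
    rows = []
    for liq in (liq1, liq2):
        rhs = liq["total"] * (1 + math.cos(math.radians(liq["theta_deg"]))) / 2
        rows.append((math.sqrt(liq["d"]), math.sqrt(liq["p"]), rhs))
    (a1, b1, c1), (a2, b2, c2) = rows
    det = a1 * b2 - a2 * b1
    x = (c1 * b2 - c2 * b1) / det   # sqrt of dispersive component of the solid
    y = (a1 * c2 - a2 * c1) / det   # sqrt of polar component of the solid
    return x * x, y * y             # (gd_S, gp_S) in mJ/m^2

gd, gp = owens_wendt(liquids["water"], liquids["diiodomethane"])
print(f"dispersive = {gd:.1f}, polar = {gp:.1f}, total = {gd + gp:.1f} mJ/m^2")
```

With these inputs the total SFE comes out in the mid-40s mJ/m^2, in the range typically reported for PMMA; the Harmonic Mean and Acid-Base approaches replace the square-root combining rule and would give somewhat different splits.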
|
13 |
Multi-objective optimization in learn to pre-compute evidence fusion to obtain high quality compressed web search indexes. Pal, Anibrata 19 April 2016 (has links)
CAPES - Coordenação de Aperfeiçoamento de Pessoal de Nível Superior / The world of information retrieval revolves around web search engines. Text search engines are among the most important sources for routing information. Web search engines index huge volumes of data and handle billions of documents. Learning-to-rank methods have been adopted in recent years to generate high-quality answers for search engines. The ultimate goal of these systems is to provide high-quality results and, at the same time, reduce the computational time of query processing. It follows directly that reading from smaller, more compact indexes accelerates data access, in other words, reduces computational time during query processing.
In this thesis we study the use of learning-to-rank methods not only to produce high-quality rankings of search results, but also to optimize another important aspect of search systems: the compression achieved in their indexes. We show that it is possible to achieve impressive gains in search-engine index compression with virtually no loss in the final quality of results by using simple yet effective multi-objective optimization techniques in the learning process. We also applied basic pruning techniques to assess the impact of pruning on index compression. In our best approach, we achieved more than 40% compression of the existing index while keeping the quality of results on par with methods that disregard compression. / Web search engines index large volumes of data, handling collections that are often composed of tens of billions of documents. Machine learning methods have been adopted to generate high-quality answers in these systems and, more recently, machine learning methods have been proposed for evidence fusion during the indexing of the databases. These methods thus serve not only to improve answer quality in search systems, but also to reduce query-processing costs. The only index-time evidence-fusion method proposed in the literature focuses exclusively on learning fusion functions that produce good results during query processing, optimizing this single objective in the learning process. The present work proposes the use of a multi-objective learning method, aiming to optimize, at the same time, both the quality of the answers produced and the degree of compression of the index produced by the rank fusion. The results indicate that adopting a multi-objective learning process yields a significant improvement in the compression of the produced indexes without significant loss in the final quality of the ranking produced by the system.
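The abstract does not spell out its optimization formulation, but the simplest multi-objective device it alludes to, trading ranking quality against index size, can be sketched as a weighted-sum scalarization. The candidate configurations and their quality/size numbers below are invented stand-ins, not the thesis's evidence-fusion model:

```python
# Weighted-sum scalarization: fold two objectives (ranking quality to
# maximize, normalized index size to minimize) into a single score.

def scalarized_objective(quality, index_size, alpha=0.8):
    """alpha weights quality; (1 - alpha) penalizes index size."""
    return alpha * quality - (1 - alpha) * index_size

# Hypothetical candidate fusion configurations:
# (name, NDCG-like quality, index size relative to the uncompressed fusion)
candidates = [
    ("quality-only fusion", 0.82, 1.00),
    ("balanced fusion",     0.81, 0.58),
    ("aggressive pruning",  0.74, 0.40),
]

best = max(candidates, key=lambda c: scalarized_objective(c[1], c[2]))
print(best[0])
```

Under this weighting the balanced configuration wins: it gives up almost no quality while shrinking the index substantially, which mirrors the thesis's reported outcome of large compression gains at near-identical result quality.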
|
14 |
Efektivní algoritmy pro vysoce přesný výpočet elementárních funkcí / Effective Algorithms for High-Precision Computation of Elementary Functions. Chaloupka, Jan January 2013 (has links)
Nowadays high-precision computations are increasingly in demand, whether for simulations at the level of atoms, where every digit is important and an inaccuracy in computation can invalidate the result, or for numerical approximation in solving partial differential equations, where a small deviation renders the result useless. Such computations are carried out over data types with precision on the order of hundreds to thousands of digits, or even more. This puts pressure on the time complexity of problem solving, so it is essential to find very efficient methods of computation. Every complex physical problem is usually described by a system of equations frequently containing elementary functions such as sine, cosine or the exponential. The aim of this work is to design and implement methods that, for a given precision, an arbitrary elementary function and a point, compute the function's value in the most efficient way. The core of the work is the application of methods based on the AGM (arithmetic-geometric mean) with a time complexity of order $O(M(n)\log_2{n})$, expressed in terms of the cost of multiplication $M(n)$; this complexity cannot be improved. Many libraries support multi-precision arithmetic, one of which is GMP, and it is used for the efficient implementation of the methods. Finally, all implemented methods are compared with existing ones.
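One classic AGM-based method of the kind the abstract describes is the Gauss-Legendre iteration for pi, which converges quadratically, roughly doubling the number of correct digits per step. A minimal sketch in plain floating point follows; real high-precision use would run the same iteration over a bignum type such as those provided by GMP:

```python
import math

def gauss_legendre_pi(iterations=4):
    """Gauss-Legendre (AGM-based) approximation of pi.
    a and b converge to their common arithmetic-geometric mean;
    t accumulates the correction terms of the iteration."""
    a, b = 1.0, 1.0 / math.sqrt(2.0)
    t, p = 0.25, 1.0
    for _ in range(iterations):
        a_next = (a + b) / 2      # arithmetic mean
        b = math.sqrt(a * b)      # geometric mean
        t -= p * (a - a_next) ** 2
        a, p = a_next, 2 * p
    return (a + b) ** 2 / (4 * t)

print(gauss_legendre_pi())
```

Three to four iterations already exhaust double precision; the $O(M(n)\log_2{n})$ bound comes from needing only about $\log_2 n$ such iterations for $n$ digits, each dominated by a multiplication-cost operation.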
|
15 |
The production of a lyotropic liquid crystal coated powder precursor through twin screw extrusion. Likhar, Lokesh January 2013 (has links)
The twin screw extrusion technique was explored to produce a lyotropic liquid crystal coated powder precursor by exploiting the thermoreversible gelation property of Pluronic F127, yielding a powder precursor without granular aggregates, or with less compacted ones. MCC particles coated with a Pluronic F127 solution loaded with the highly soluble drug chlorpheniramine maleate (CPM), prepared through twin screw extrusion, were examined for formation of the cubic phase (gel), with a view to developing controlled-release formulations and to coating very fine particles that cannot be handled by traditional bead coaters. Controlled-release formulations are beneficial in reducing the administration frequency of highly soluble drugs with short half-lives, and also address the problem of polypharmacy in elderly patients by reducing dosage frequency. An unusual refrigerated temperature (5 °C) profile for twin screw extrusion was selected based on the complex viscoelastic flow behaviour of Pluronic F127 solution, which was found to be highly temperature-sensitive. At low temperature the Pluronic F127 solution was Newtonian in flow and less viscoelastic, so that refrigerated (5 °C) conditions were suitable for mixing and coating the MCC particles while avoiding compacted aggregates. At higher temperatures (35-40 °C) the solution exhibited shear thinning and prominent viscoelasticity, properties which were exploited to force the CPM-containing Pluronic F127 solution to stick to the MCC surface; this was achieved by elevating the temperature of the last zone of the extrusion barrel. It was found that, to avoid compacted aggregates, the MCC must be five times the weight of the Pluronic F127 solution and processed at a screw speed of 400 RPM or above at refrigerated temperature.
Processing was not smooth at ambient temperature: frictional heat and high torque were generated by significant compaction of the coated particles, attributable to the elastic behaviour of Pluronic F127 solution at temperatures between ambient and typical body temperature. PLM images confirmed cubic phase (gel) formation by the Pluronic F127 coating, which was thickest at the maximum Pluronic F127 concentration (25%). SEM images showed smoothing of the surface topography and stretching and elongation of MCC fibres after extrusion, indicative of coating through extrusion processing. Plastic deformation was observed at lower Pluronic F127 concentrations and higher MCC proportions. Powder-flow analysis showed a significantly lower work of cohesion for batches with more aggregates compared with batches with the fewest aggregates. A regression analysis on factorial-design batches was conducted to identify the significant independent variables and to quantify their impact on the dependent variables, namely % torque, geometric mean diameter and work of cohesion. The regression data showed that the coefficient of determination for all three dependent variables was in the range of 55-62%. The pharmaceutical performance of the coated LLC precursor prepared through twin screw extrusion was very disappointing in terms of controlled release: almost 100% of the chlorpheniramine maleate was released within 10-15 minutes, i.e. burst release. An MDSC method was developed within this work to detect cubic phase formation in Pluronic F127 solution, taking into account sample size, the effects of heating and cooling, sample heat capacity, and the parameters giving the highest sensitivity that the sample can follow accurately without phase lag, so as to produce accurate, repeatable results.
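For one of the dependent variables named above, the mass-weighted geometric mean diameter can be computed from sieve-type particle-size data as below. The sizes and fractions are illustrative, not the study's measurements:

```python
import math

def geometric_mean_diameter(sizes_um, mass_fractions):
    """Mass-weighted geometric mean particle diameter:
    exp( sum(f_i * ln d_i) / sum(f_i) ), with d_i the size-class
    mid-points and f_i the retained mass fractions."""
    total = sum(mass_fractions)
    log_mean = sum(f * math.log(d)
                   for d, f in zip(sizes_um, mass_fractions)) / total
    return math.exp(log_mean)

# Hypothetical sieve mid-point sizes (micrometres) and mass fractions
sizes = [63, 90, 125, 180, 250]
fractions = [0.10, 0.25, 0.35, 0.20, 0.10]

print(round(geometric_mean_diameter(sizes, fractions), 1))
```

The geometric mean is preferred over the arithmetic mean here because particle-size distributions are typically log-normal, so averaging in log space gives a more representative central diameter.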
|
16 |
台灣股票市場的長期超額報酬與股票風險溢酬值 / The Equity Excess Return and Risk Premium of Taiwan Stock Market. 簡瑞璞, Chien, Dennis Jui-Pu Unknown Date (has links)
The difference between the realized investment return and the risk-free interest rate is called the excess return, while the portion of the expected stock return exceeding the risk-free rate is the equity risk premium, an important input to many asset-pricing models such as the Capital Asset Pricing Model. Several theoretical frameworks attempt to explain the risk premium, for example the equity premium puzzle, myopic loss aversion, survivorship bias, and mean reversion and aversion.
Studying the excess return and equity risk premium of the Taiwan stock market helps investors and firms form rational expectations of stock-market returns and risk. Analyzing Taiwan's financial markets from 1967 to 2003 and computing geometric average annual returns over this 37-year period, with the Taiwan Stock Exchange Capitalization Weighted Stock Index as the Taiwan stock market return, the realized real annual return was 6.71%. Using First Bank's one-year time-deposit rate as the risk-free rate, the real NT-dollar deposit rate was 3.07% per year, and the consumer price index grew 4.80% per year. On annual data, the real excess return of Taiwan stocks was 12.48% (arithmetic mean) and 3.63% (geometric mean) per year; on monthly data, 0.77% (arithmetic) and 0.25% (geometric) per month. Over these 37 years, the excess-return phenomenon in Taiwan stocks was no more pronounced than in European and US markets, and was also lower than typical market expectations.
Owing to data limitations, the theoretical excess return of Taiwan stocks over the thirteen years from 1991 to 2003, computed via the constant-dividend-growth model and the earnings-growth model, was 0.6% and -4.3% respectively; over this period Taiwan stock returns were unremarkable relative to NT-dollar deposits, with low excess returns. The realized real excess returns over the same period were 1.69% (arithmetic mean) and -3.35% (geometric mean). The current Taiwan equity risk premium is estimated to be close to the excess-return figures from the 37-year historical record: arithmetic means of 12.48% (annual) and 0.77% (monthly), and geometric means of 3.63% (annual) and 0.25% (monthly). A low risk premium is the prevailing condition in the Taiwan stock market. / The difference between the observed historical investment return and the risk-free interest rate is the excess return. The equity risk premium (ERP) is the expected rate of return on the aggregate stock market in excess of the rate on a risk-free security. The ERP is one of the important factors in many asset-pricing models, including the Capital Asset Pricing Model (CAPM). Many theories and factors have been advanced to explain the equity risk premium: the equity premium puzzle, myopic loss aversion, survivorship bias, mean reversion and aversion, etc.
Studying the Taiwan equity excess return and risk premium is fundamental for investors and institutions evaluating the expected market investment return and risk. Analyzing data from 1967 to 2003, a thirty-seven-year holding period, with the Taiwan Stock Exchange Capitalization Weighted Stock Index as the Taiwan stock market return, the realized real return was 6.71%. With the one-year bank time-deposit rate as the NT-dollar risk-free rate, the real interest rate was 3.07%, and the consumer price index (CPI) annual growth rate was 4.80%. The historical real yearly excess return was 12.48% (arithmetic mean) and 3.63% (geometric mean); the historical real monthly excess return was 0.77% (arithmetic mean) and 0.25% (geometric mean). Taiwan's realized equity excess returns were not higher than those in developed countries and were also lower than the market's expectation.
Due to the limits of available data, the theoretical equity excess returns calculated from two models, the constant-growth dividend discount model (dividend-yield model) and the earnings-yield model, were 0.6% and -4.3% for 1991 to 2003. Compared with the realized real excess returns over the same period, 1.69% (arithmetic mean) and -3.35% (geometric mean), Taiwan stock market returns were not spectacular. The current equity risk premium of the Taiwan stock market is low and should be near the level of the long-run realized equity excess return.
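The gap between arithmetic and geometric mean excess returns reported above follows from compounding under volatility: the geometric (compound) average is always at or below the arithmetic average, and the gap widens with variance. A minimal sketch with invented annual returns, not the thesis data:

```python
def arithmetic_mean(returns):
    """Simple average of per-period returns."""
    return sum(returns) / len(returns)

def geometric_mean(returns):
    """Compound (geometric) average return: the constant per-period
    rate that would produce the same cumulative growth."""
    growth = 1.0
    for r in returns:
        growth *= (1 + r)
    return growth ** (1 / len(returns)) - 1

# Hypothetical annual real excess returns for a volatile market
excess = [0.45, -0.30, 0.25, -0.20, 0.35]

print(arithmetic_mean(excess))  # simple average
print(geometric_mean(excess))   # compound average, noticeably lower
```

With these numbers the arithmetic mean is 11% while the geometric mean is only about 6.5%, the same qualitative pattern as the thesis's 12.48% versus 3.63% annual figures for Taiwan stocks.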
|