241 |
Estimativa do viés de substituição na inflação ao consumidor e seu impacto na previdência / Estimate of the substitution bias in consumer inflation and its effect on the social security system. Rojas, Andres Francisco Medeyros, 23 April 2008 (has links)
The objective of this study is to estimate the product substitution bias in the calculation of consumer inflation, that is, to estimate inflation taking into account the possibility of switching goods within a basket of products in response to changes in relative prices. The formula currently used by IBGE, for both the INPC and the IPCA, to measure consumer inflation is a modified Laspeyres with a moving base (the Bureau index), which considers the same basket of goods and services over time. This index tends to overestimate the increase in the cost of living precisely because it does not account for substitution. Following previous work, the bias was estimated by comparing a Laspeyres index for a subset of the IPCA with inflation measured by the Theil-Tornqvist index for the same subset of products. The latter approximates a cost-of-living index and therefore accounts for the substitution of goods. However, it requires frequent updates of the baskets of goods and services or of the weighting structures. As there are no household consumption surveys in Brazil that provide periodic weighting structures, these had to be estimated. For this purpose, predictions from an AIDS demand system model based on the microdata of POF 95-96 were used. The estimated substitution bias was 3.33 p.p. from August 1999 to June 2006, which is equivalent to saying that consumer inflation was overestimated by 0.31 p.p. per year. Because it was impossible to work with the most disaggregated level of the IPCA (the sub-item), the calculated bias is certainly an underestimate. If the estimated bias had been discounted from the adjustments given to retirements, pensions and other benefits granted by the Ministry of Welfare and Social Assistance, the government could have saved approximately R$ 8 billion from 2000 to June 2006.
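For context, here is a minimal sketch of the two price index formulas being compared, in their standard textbook forms (the notation is mine, not drawn from the thesis):

    \[
    P_L = \sum_i w_i^{0}\,\frac{p_i^{t}}{p_i^{0}}, \qquad
    P_T = \prod_i \left(\frac{p_i^{t}}{p_i^{0}}\right)^{(w_i^{0}+w_i^{t})/2}, \qquad
    w_i^{s} = \frac{p_i^{s} q_i^{s}}{\sum_j p_j^{s} q_j^{s}}
    \]

The Laspeyres index P_L keeps the base-period weights w_i^0 fixed, while the Törnqvist index P_T averages base- and current-period weights, which is why it approximates a cost-of-living index; the substitution bias estimated here is essentially the accumulated gap between the two.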
|
242 |
Impact of extensive green roofs on energy performance of school buildings in four North American climates. Mahmoodzadeh, Milad, 31 May 2018 (has links)
Buildings are among the major consumers of energy and account for a considerable share of greenhouse gas generation. Green roofs are regarded as an appropriate strategy to reduce heating and cooling loads in buildings. However, their energy performance is influenced by different design parameters, which should be optimized for the corresponding climate zone. Previous investigations mainly analyzed various design parameters in a single climate zone; the interaction of parameters across different climate zones was not considered. Also, studies have mostly been conducted for commercial or residential buildings. Among building types, schools, with their large roof surfaces, are among the major consumers of energy in North America, yet the literature shows a lack of study of the effect of green roofs on the thermal and energy performance of this building type. This study performs a comprehensive parametric analysis to evaluate the influence of green roof design parameters on the thermal and energy performance of a secondary school building in four North American climate zones (Toronto, ON; Vancouver, BC; Las Vegas, NV; and Miami, FL). Soil moisture content, soil thermal properties, leaf area index, plant height, leaf albedo, thermal insulation thickness and soil thickness were used as variables. Optimal green roof parameters were found to be closely related to the meteorological conditions in each city. In terms of energy savings, the results show that a light substrate has better thermal performance for an uninsulated green roof. The recommended soil thickness and leaf area index in the four cities are 0.15 m and 5, respectively. The optimal plant height is 0.3 m for the cooling-dominated climates and 0.1 m for the heating-dominated cities. Plant albedo had the least impact on energy consumption, although it is effective in mitigating the urban heat island effect. Finally, unlike the cooling load, which is largely influenced by the substrate and vegetation, the heating load is considerably affected by the thermal insulation rather than by the green roof design parameters. / Graduate
|
243 |
Development of a dryland corn productivity index for Kansas. Bush, Erin, January 1900 (has links)
Master of Science / Department of Agronomy / Michel D. Ransom / For many decades, researchers have created indices to rate soil on its ability to produce vegetative growth. The Soil Rating for Plant Growth (SRPG) model was developed by the USDA Natural Resources Conservation Service (NRCS) in 1992 to array soil mapping units relative to their potential to produce dryland commodity crops independent of management. A few years later, the Kansas Department of Revenue (KDR) Property Valuation Division (PVD) began using the SRPG model for land valuation. Since then, the SRPG has been updated to a Kansas-specific model, KS-SRPG, later renamed and modified to the PRGM-General Crop Production Index (GCPI) and stored in the National Soil Information System (NASIS). In 2003, modifications were made to the GCPI model to develop an irrigated index for Kansas, termed the Kansas Irrigated Productivity Index (KIPI). KS-SRPG and KIPI are still used by the PVD, but they are no longer updated, are not available to the public, and are difficult to understand. Therefore, it is necessary to construct a new model to predict dryland corn productivity for Kansas soil mapping units. This thesis calibrated and validated a new dryland corn index, termed the Kansas Commodity Crop Productivity Index (KCCPI) corn submodel. The KCCPI model was built in NASIS with the goal of being available to the public on Web Soil Survey. Corn yield data in NASIS were used to calibrate the model during development. Dryland corn yield data were obtained from the Risk Management Agency (RMA) by Common Land Unit (CLU) and regressed against KCCPI for validation. Results during calibration were promising, but KCCPI was not as successful during validation, suggesting that more work needs to be done on the model with additional sets of yield data.
|
244 |
Avaliação da medida h-index na ordenação de resultados na blogosfera / Evaluation of the h-index measure for ranking blogosphere results. Costa, Tiago Valente da, January 2009 (has links)
Integrated master's thesis. Electrical and Computer Engineering (Major in Telecommunications). Faculdade de Engenharia, Universidade do Porto, 2009.
|
245 |
Application of Image Recognition Technology to Foraminiferal Assemblage Analyses. Gfatter, Christian Helmut, 12 October 2018 (has links)
Analyses of foraminiferal assemblages involve time-consuming microscopic assessment of sediment samples. Image recognition software, which systematically matches features within sample images against an image library, is widely used in contexts ranging from law enforcement to medical research. At present, scientific applications such as the identification of specimens in plankton samples use flow-through systems in which samples are suspended in liquid and pass through a beam of light, where the images are captured using transmitted light. Identification of foraminifers generally requires reflected light, because most shells are relatively opaque.
My goal was to design and test a protocol to directly image foraminiferal specimens using reflected light and then apply recognition software to those images. A library of high-quality digital images was established by photographing foraminifers identified conventionally from sediment samples from the west Florida shelf. Recognition software, VisualSpreadsheet™ by Fluid Imaging Technologies, Inc., was then trained to improve automated assemblage counts, and those results were compared to results from direct visual assessment. The auto-classification feature produced composite accuracies for foraminiferal groups in the range of 60–70% compared to traditional visual identification by a researcher using a stereo microscope. Site SC34, the source of images for the original image library, had an initial accuracy of 75% that was improved slightly through an alteration to one of the software classes, but composite accuracy plateaued at 60% with the updated filters. Thus, advances in image acquisition and further development of image recognition software will be required to improve automated or semi-automated foraminiferal classifications. However, other potential applications were noted. For example, an advantage of acquiring digital images of entire samples or subsamples is the ability to collect quantitative data such as diameter and length, allowing size-frequency assessments of foraminiferal populations while possibly automating grain-size analyses without requiring separate processing. In addition, data files of library and sample specimens can be readily shared with other researchers.
|
246 |
Why we should stop using the Kogut-Singh-Index. Konara, Palitha; Mohr, Alexander, 06 1900 (has links) (PDF)
The Kogut and Singh (1988) index is the most widely used construct to measure cultural distance in international business and management research. We show that this index is incorrectly specified and captures the squared cultural distance. This inaccuracy is problematic because it means that the empirical findings on the effects of cultural distance presented in different strands of international business research are likely to be misleading. We specify the correct form of the distance measure based on the Euclidean distance formula and demonstrate the implications of using the incorrectly specified Kogut and Singh (1988) index.
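For reference, the index in question and its Euclidean-distance counterpart can be sketched as follows (the first formula is the published Kogut and Singh (1988) specification; the second is one natural correction, and the paper itself should be consulted for the exact corrected measure):

    \[
    \mathrm{KS}_j = \frac{1}{4}\sum_{i=1}^{4}\frac{(I_{ij}-I_{iu})^{2}}{V_i}, \qquad
    \mathrm{CD}_j = \sqrt{\frac{1}{4}\sum_{i=1}^{4}\frac{(I_{ij}-I_{iu})^{2}}{V_i}}
    \]

where I_{ij} is the score of country j on cultural dimension i, I_{iu} is the score of the reference country, and V_i is the variance of dimension i. Because KS_j sums squared deviations without taking a square root, it grows with the squared distance rather than the distance itself, which is the misspecification identified above.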
|
247 |
Ethnic and cultural influences on body composition, lifestyle and body image among males. Kagawa, Masaharu, January 2004 (has links)
The aim of this research was to determine ethnic and cultural influences on body composition, lifestyle, and aspects of body image (perception, acceptability, and satisfaction) of younger (age 18-40 years) Australian and Japanese males, the latter including groups living in Australia and Japan. The sample sizes of the three groups were 68 Japanese living in Australia, 84 Japanese living in Japan, and 72 Australian Caucasian males, respectively. The methodology included body composition assessments (by anthropometry and DXA), lifestyle and body image questionnaires, and dietary records. The study found significant (p < 0.05) ethnic differences in %BF at given BMI levels, and for Japanese the BMI values of 23.6 kg/m2 and 28.6 kg/m2 were found to be equivalent to 25 and 30 for Caucasians when used to classify individuals as "overweight" and "obese". Equations in common use for the calculation of body composition in Japanese males were evaluated using modern methods of body composition assessment and found to need considerable modification. New regression equations that represent BMI-%BF relationships for Japanese and Australians were proposed: Japanese: Log %BF = -1.330 + 1.896(log BMI), (R2 = 0.547, SEE = 0.09); Australians: Log %BF = -1.522 + 2.001(log BMI), (R2 = 0.544, SEE = 0.10). Equations were also developed to predict %BF for Japanese and Australian males from body composition assessments using anthropometry and DXA: Japanese: %BF = 0.376 + 0.402(abdominal) + 0.772(medial calf) + 0.217(age), (R2 = 0.786, SEE = 2.69); Australians: %BF = 2.184 + 0.392(medial calf) + 0.678(supraspinale) + 0.467(triceps), (R2 = 0.864, SEE = 2.37). Lifestyle factors were found to influence perceptions of body image. / Australian males participate in physical activity more frequently than their Japanese counterparts (Australians: 98.6% involved in vigorous activity at least once per week; Japanese living in Japan: 85.7%; Japanese living in Australia: 72.1%). Significant differences (p < 0.05) in energy contribution patterns were found between the Japanese group (Protein: 14.4%, Carbohydrate: 50.4%, Fat: 28.1%), the Japanese living in Australia (Protein: 16.3%, Carbohydrate: 47.3%, Fat: 32.3%), and the Australians (Protein: 17.1%, Carbohydrate: 47.9%, Fat: 30.6%). This shows that the Japanese living in Australia have adopted a more westernised diet than those living in Japan. Body image assessments were done on all study groups using the Somatomorphic Matrix (SM) computer program and questionnaires, including the Ben-Tovim Walker Body Attitudes Questionnaire (BAQ), the Attention to Body Shape Scale (ABS), and the Eating Attitudes Test (EAT). Japanese males tended to overestimate their weight and amount of body fat, while Australian Caucasian males underestimated these parameters. The Japanese groups had higher scores on the self-disparagement subscale and lower scores on the strengths and attractiveness subscales of the BAQ than Australian males. Australian males also had higher scores on the EAT total score and the dieting subscale of the EAT questionnaire than Japanese males. When all groups of subjects selected their perceived body image from the SM program menu, these results had no relationship with measured body composition values, suggesting that further development of this program is needed for use in these populations.
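As an illustration, the proposed BMI-%BF regressions can be applied directly; the snippet below is a sketch that assumes the logs are base 10 (the abstract does not state the base):

    import math

    # BMI-%BF regression coefficients reported in the abstract
    COEF = {"japanese": (-1.330, 1.896), "australian": (-1.522, 2.001)}

    def percent_bf(bmi, group):
        # log10(%BF) = a + b * log10(BMI), assuming base-10 logs
        a, b = COEF[group]
        return 10 ** (a + b * math.log10(bmi))

    print(round(percent_bf(25.0, "japanese"), 1))    # ~20.9 %BF
    print(round(percent_bf(25.0, "australian"), 1))  # ~18.9 %BF

At the Caucasian "overweight" cutoff of BMI 25, the equations give a higher %BF for Japanese males, which is consistent with the lower Japanese-equivalent cutoff of 23.6 kg/m2 reported above.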
|
248 |
Effective web crawlers. Ali, Halil, hali@cs.rmit.edu.au, January 2008 (has links)
Web crawlers are the component of a search engine that must traverse the Web, gathering documents in a local repository for indexing by a search engine so that they can be ranked by their relevance to user queries. Whenever data is replicated in an autonomously updated environment, there are issues with maintaining up-to-date copies of documents. When documents are retrieved by a crawler and have subsequently been altered on the Web, the effect is an inconsistency in user search results. While the impact depends on the type and volume of change, many existing algorithms do not take the degree of change into consideration, instead using simple measures that treat any change as significant. Furthermore, many crawler evaluation metrics do not consider index freshness or the amount of impact that crawling algorithms have on user results. Most of the existing work makes assumptions about the change rate of documents on the Web, or relies on the availability of a long history of change.
Our work investigates approaches to improving index consistency: detecting meaningful change, measuring the impact of a crawl on collection freshness from a user perspective, developing a framework for evaluating crawler performance, determining the effectiveness of stateless crawl ordering schemes, and proposing and evaluating the effectiveness of a dynamic crawl approach. Our work is concerned specifically with cases where there are few or no past change statistics with which predictions can be made. Our work analyses different measures of change and introduces a novel approach to measuring the impact of recrawl schemes on search engine users. Our schemes detect important changes that affect user results; other well-known and widely used schemes have to retrieve around twice the data to achieve the same effectiveness as ours. Furthermore, while many studies have assumed that the Web changes according to a model, our experimental results are based on real web documents.
We analyse various stateless crawl ordering schemes that have no past change statistics with which to predict which documents will change, none of which, to our knowledge, has been tested to determine effectiveness in crawling changed documents. We empirically show that the effectiveness of these schemes depends on the topology and dynamics of the domain crawled, and that no one static crawl ordering scheme can effectively maintain freshness, motivating our work on dynamic approaches. We present our novel approach to maintaining freshness, which uses the anchor text linking documents to determine the likelihood of a document changing, based on statistics gathered during the current crawl. We show that this scheme is highly effective when combined with existing stateless schemes. When we combine our scheme with PageRank, our approach allows the crawler to improve both the freshness and the quality of a collection. Our scheme improves freshness regardless of which stateless scheme it is used in conjunction with, since it uses both positive and negative reinforcement to determine which document to retrieve. Finally, we present the design and implementation of Lara, our own distributed crawler, which we used to develop our testbed.
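As an illustration only (the abstract does not give the algorithm, so the names and structure below are assumptions, not the thesis's actual implementation), an anchor-text change-likelihood score with positive and negative reinforcement might look like this:

    from collections import defaultdict

    # Illustrative sketch: score URLs by the change history of the anchor-text
    # terms that link to them, using only statistics from the current crawl.
    term_seen = defaultdict(int)     # times a term appeared in anchor text
    term_changed = defaultdict(int)  # times the linked document had changed

    def record(anchor_terms, doc_changed):
        # Positive and negative reinforcement from each fetched document
        for t in anchor_terms:
            term_seen[t] += 1
            if doc_changed:
                term_changed[t] += 1

    def change_score(anchor_terms):
        # Estimated likelihood that the linked document has changed
        rates = [term_changed[t] / term_seen[t] for t in anchor_terms if term_seen[t]]
        return sum(rates) / len(rates) if rates else 0.5  # neutral prior for unseen terms

The crawler would then fetch the highest-scoring URLs first, optionally blending the score with a quality signal such as PageRank, as described above.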
|
249 |
On Stock Index Volatility With Respect to Capitalization. Pachentseva, Marina; Bronskaya, Anna, January 2007 (has links)
Confidence in the future is a significant factor for business development. Plans, however, are frequently subject to the influence of the market environment, so it is necessary to give appropriate consideration to the instability inherent in dynamic development. Volatility, variance and standard deviation are used to characterize the deviation of the investigated quantity from its mean value. Volatility is one of the main instruments for measuring the risk of an asset. The increasing availability of financial market data has enlarged the potential of volatility research, but has also encouraged research into longer-horizon volatility forecasts. In this paper we investigate stock index volatility with respect to capitalization with the help of GARCH modelling. Three indexes of the OMX Nordic Exchange were chosen for our research: the Nordic list segment indexes comprising Nordic Large Cap, Mid Cap and Small Cap, based on the three market capitalization groups. We implement GARCH modelling for the considered indexes and compare our results in order to conclude which of the indexes is more volatile. The OMX Nordic list index is quite new (2002) and was reorganized as late as October 2006. Its current value is about 300 and no options exist. In the current work we are also interested in estimation of the Heston model (SV model), which is popular in the financial world and can be used in option pricing in the future. The results of our investigation show that the Large Cap index is more volatile than the Mid and Small Cap indexes.
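For reference, the GARCH(1,1) specification underlying such a comparison can be written in its standard form (a sketch; the notation is not taken from the thesis):

    \[
    r_t = \mu + \varepsilon_t, \qquad \varepsilon_t = \sigma_t z_t, \quad z_t \sim N(0,1), \qquad
    \sigma_t^{2} = \omega + \alpha\,\varepsilon_{t-1}^{2} + \beta\,\sigma_{t-1}^{2}
    \]

with \(\omega > 0\), \(\alpha, \beta \geq 0\) and \(\alpha + \beta < 1\) for covariance stationarity; the closer \(\alpha + \beta\) is to one, the more persistent the fitted volatility, so comparing the fitted parameters across the Large, Mid and Small Cap series is one way to judge which index is more volatile.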
|
250 |
How reliable is implied volatility? A comparison between implied and actual volatility on an index at the Nordic Market. Kozyreva, Maria, January 2007 (has links)
Volatility forecasts play a central role in the financial decision-making process. An intrinsic purpose of any investor is earning a profit, and for that purpose investors need to estimate risk. One of the most efficient methods to this end is volatility estimation. In this thesis I compare the CBOE Volatility Index (VIX) with the actual volatility of an index on the Nordic Market. The actual volatility is defined as the one-day-ahead prediction calculated using the GARCH(1,1) model. Using the VIX model I performed consecutive predictions 30 days ahead from February 2, 2007 to March 6, 2007. These predictions were compared with the GARCH(1,1) one-day-ahead predictions for the same period. To my knowledge, such comparisons have not been performed earlier on the Nordic Market. The conclusion of the study was that the VIX predictions tend toward higher values than the GARCH(1,1) predictions except for large upward price jumps, which indicates that the VIX is not able to predict future shocks. Apart from these jumps, the VIX more often shows larger values than the GARCH(1,1); this is interpreted as uncertainty in the prediction. However, the VIX predictions follow the actual volatility reasonably well. I conclude that the VIX estimate can be used as a reliable estimator of market volatility.
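A minimal sketch of the GARCH(1,1) one-day-ahead forecast described above, using the Python "arch" package (the file name and column are assumptions; the thesis's own implementation is not given):

    import numpy as np
    import pandas as pd
    from arch import arch_model

    # Assumed input: daily closing prices of the Nordic index under study
    prices = pd.read_csv("nordic_index.csv", index_col=0, parse_dates=True)["close"]
    returns = 100 * np.log(prices).diff().dropna()  # percent log returns

    res = arch_model(returns, vol="Garch", p=1, q=1).fit(disp="off")

    # One-day-ahead conditional volatility, annualised for comparison
    # against an implied-volatility (VIX-style) quote
    fcast = res.forecast(horizon=1)
    sigma_daily = np.sqrt(fcast.variance.values[-1, 0])
    print(sigma_daily * np.sqrt(252))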
|