241

Impact of extensive green roofs on energy performance of school buildings in four North American climates

Mahmoodzadeh, Milad 31 May 2018 (has links)
Buildings are among the largest consumers of energy and account for a considerable share of greenhouse gas emissions. Green roofs are regarded as an effective strategy for reducing heating and cooling loads in buildings, but their energy performance is influenced by design parameters that should be optimized for the corresponding climate zone. Previous investigations mainly analyzed design parameters within a single climate zone, without considering how those parameters interact across different climate zones, and most studies have addressed commercial or residential buildings. Among building types, schools, with their large roof surfaces, are major energy consumers in North America, yet the literature shows a lack of study on the effect of green roofs on the thermal and energy performance of this building type. This study performs a comprehensive parametric analysis to evaluate the influence of green roof design parameters on the thermal and energy performance of a secondary school building in four North American climate zones (Toronto, ON; Vancouver, BC; Las Vegas, NV; and Miami, FL). Soil moisture content, soil thermal properties, leaf area index, plant height, leaf albedo, thermal insulation thickness and soil thickness were used as variables. The optimal green roof parameters were found to be closely related to the meteorological conditions of each city. In terms of energy savings, the results show that a light substrate gives better thermal performance for an uninsulated green roof, and the recommended soil thickness and leaf area index in all four cities are 0.15 m and 5, respectively. The optimal plant height is 0.3 m for the cooling-dominated climates and 0.1 m for the heating-dominated cities. Plant albedo had the least impact on energy consumption, although it is effective in mitigating the urban heat island effect. Finally, unlike the cooling load, which is largely influenced by the substrate and vegetation, the heating load is affected mainly by the thermal insulation rather than by the green roof design parameters. / Graduate
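A parametric analysis of this kind amounts to sweeping the design variables and ranking the resulting annual energy use per climate. A minimal sketch, where `simulate_annual_energy` is a hypothetical stand-in for a whole-building simulation engine and the variable levels are invented for illustration:

```python
from itertools import product

# Invented design-variable levels, loosely following the variables in the abstract.
soil_thickness = [0.05, 0.10, 0.15, 0.20]   # m
leaf_area_index = [1, 2, 3, 4, 5]
plant_height = [0.1, 0.2, 0.3]              # m
leaf_albedo = [0.2, 0.3, 0.4]
cities = ["Toronto", "Vancouver", "Las Vegas", "Miami"]

def simulate_annual_energy(city, soil, lai, height, albedo):
    """Hypothetical stand-in for a building simulation returning kWh/yr.
    The toy expression only makes the sweep runnable; it is not a physical model."""
    base = {"Toronto": 900.0, "Vancouver": 700.0, "Las Vegas": 800.0, "Miami": 750.0}
    return base[city] - 120.0 * soil * lai - 15.0 * height - 8.0 * albedo

# Lowest-energy parameter combination per city.
best = {
    city: min(
        (simulate_annual_energy(city, s, l, h, a), (s, l, h, a))
        for s, l, h, a in product(soil_thickness, leaf_area_index,
                                  plant_height, leaf_albedo)
    )
    for city in cities
}
```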
242

Development of a dryland corn productivity index for Kansas

Bush, Erin January 1900 (has links)
Master of Science / Department of Agronomy / Michel D. Ransom / For many decades, researchers have created indices to rate soil on its ability to produce vegetative growth. The Soil Rating for Plant Growth (SRPG) model was developed by the USDA-Natural Resources Conservation Service (NRCS) in 1992 to array soil mapping units relative to their potential to produce dryland commodity crops independent of management. A few years later, the Kansas Department of Revenue (KDR) Property Valuation Division (PVD) began using the SRPG model for land valuation. Since then, the SRPG was updated to a Kansas-specific model, KS-SRPG, later renamed and modified to the PRGM-General Crop Production Index (GCPI), and stored in the National Soil Information System (NASIS). In 2003, the GCPI model was modified to develop an irrigated index for Kansas, termed the Kansas Irrigated Productivity Index (KIPI). KS-SRPG and KIPI are still used by the PVD, but are no longer updated, are not available to the public, and are difficult to understand. Therefore, it is necessary to construct a new model to predict dryland corn productivity for Kansas soil mapping units. This thesis calibrated and validated a new dryland corn index, termed the Kansas Commodity Crop Productivity Index (KCCPI) corn submodel. The KCCPI model was built in NASIS with the goal of being available to the public on Web Soil Survey. Corn yield data in NASIS were used to calibrate the model during development. Dryland corn yield data were obtained from the Risk Management Agency (RMA) by Common Land Unit (CLU) and regressed against KCCPI for validation. Results during calibration were promising, but KCCPI was not as successful during validation, suggesting that more work needs to be done on the model with additional sets of yield data.
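The validation step described above is a linear regression of observed yields on the index. A minimal sketch with invented numbers standing in for the RMA yield data:

```python
import numpy as np
from scipy import stats

# Invented validation data: KCCPI scores for soil map units and the corresponding
# RMA dryland corn yields (bu/ac) aggregated by Common Land Unit.
kccpi = np.array([34.0, 41.0, 52.0, 58.0, 63.0, 71.0, 77.0, 85.0])
yield_bu_ac = np.array([48.0, 55.0, 70.0, 74.0, 88.0, 95.0, 104.0, 118.0])

# Ordinary least-squares fit of yield against the index.
fit = stats.linregress(kccpi, yield_bu_ac)
print(f"yield = {fit.intercept:.1f} + {fit.slope:.2f} * KCCPI, "
      f"R^2 = {fit.rvalue ** 2:.3f}")
```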
243

Evaluation of the h-index measure for ranking results in the blogosphere

Costa, Tiago Valente da January 2009 (has links)
Integrated master's thesis. Electrical and Computer Engineering (Telecommunications major). Faculdade de Engenharia, Universidade do Porto, 2009.
244

Application of Image Recognition Technology to Foraminiferal Assemblage Analyses

Gfatter, Christian Helmut 12 October 2018 (has links)
Analyses of foraminiferal assemblages involve time-consuming microscopic assessment of sediment samples. Image recognition software, which systematically matches features within sample images against an image library, is widely used in contexts ranging from law enforcement to medical research. At present, scientific applications such as identification of specimens in plankton samples utilize flow-through systems in which samples are suspended in liquid and pass through a beam of light where the images are captured using transmitted light. Identification of foraminifers generally utilizes reflected light, because most shells are relatively opaque. My goal was to design and test a protocol to directly image foraminiferal specimens using reflected light and then apply recognition software to those images. A library of high-quality digital images was established by photographing foraminifers identified conventionally from sediment samples from the west Florida shelf. Recognition software, VisualSpreadsheet™ by Fluid Imaging Technologies, Inc., was then trained to improve automated assemblage counts, and those results were compared to results from direct visual assessment. The auto-classification feature produced composite accuracies of foraminiferal groups in the range of 60–70% compared to traditional visual identification by a researcher using a stereo microscope. Site SC34, the source of images for the original image library, had an initial accuracy of 75% that was improved slightly through an alteration to one of the software classes, but composite accuracy plateaued at 60% with the updated filters. Thus, image acquisition advancements and further development of image recognition software will be required to improve automated or semi-automated foraminiferal classifications. However, other potential applications were noted. For example, an advantage of acquiring digital images of entire samples or subsamples is the ability to collect quantitative data such as diameter and length, allowing size-frequency assessments of foraminiferal populations while possibly automating grain-size analyses without requiring separate processing. In addition, data files of library and sample specimens can be readily shared with other researchers.
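The composite-accuracy comparison described above reduces to checking automated labels against a researcher's visual identifications. A minimal sketch with invented labels (the genus names are illustrative only):

```python
# Visual identifications by a researcher vs. automated classifications.
visual = ["Ammonia", "Elphidium", "Ammonia", "Quinqueloculina", "Elphidium", "Ammonia"]
auto   = ["Ammonia", "Ammonia",   "Ammonia", "Quinqueloculina", "Elphidium", "Elphidium"]

correct = sum(v == a for v, a in zip(visual, auto))
print(f"composite accuracy: {correct / len(visual):.0%}")  # 67% for this toy sample
```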
245

Why we should stop using the Kogut-Singh-Index

Konara, Palitha, Mohr, Alexander 06 1900 (has links) (PDF)
The Kogut and Singh (1988) index is the most widely used construct to measure cultural distance in international business and management research. We show that this index is incorrectly specified and captures the squared cultural distance. This inaccuracy is problematic because it means that the empirical findings on the effects of cultural distance presented in different strands of international business research are likely to be misleading. We specify the correct form of the distance measure based on the Euclidean distance formula and demonstrate the implications of using the incorrectly specified Kogut and Singh (1988) index.
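The mis-specification is easy to state compactly. The Kogut and Singh (1988) index averages squared, variance-corrected differences across n cultural dimensions, so it measures a squared distance; the Euclidean correction takes the square root. A sketch of the two forms in assumed notation (I_ij is country j's score on dimension i and V_i the variance of dimension i; this is a reconstruction, not quoted from the paper):

```latex
\[
KS_{jk} = \frac{1}{n}\sum_{i=1}^{n}\frac{(I_{ij}-I_{ik})^{2}}{V_{i}}
\qquad\text{vs.}\qquad
ED_{jk} = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\frac{(I_{ij}-I_{ik})^{2}}{V_{i}}}
\]
```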
246

Ethnic and cultural influences on body composition, lifestyle and body image among males

Kagawa, Masaharu January 2004 (has links)
The aim of this research was to determine ethnic and cultural influences on body composition, lifestyle, and aspects of body image (perception, acceptability, and satisfaction) of younger (age 18-40 years) Australian and Japanese males, the latter including groups living in Australia and Japan. The sample sizes of the three groups were 68 Japanese living in Australia, 84 Japanese living in Japan, and 72 Australian Caucasian males. The methodology included body composition assessments (by anthropometry and DXA), lifestyle and body image questionnaires, and dietary records. The study found significant (p<0.05) ethnic differences in %BF at given BMI levels; for Japanese, BMI values of 23.6 kg/m2 and 28.6 kg/m2 were found to be equivalent to the Caucasian values of 25 and 30 kg/m2 used to classify individuals as "overweight" and "obese". Equations in common use for the calculation of body composition in Japanese males were evaluated using modern methods of body composition assessment and found to need considerable modification. New regression equations representing the BMI-%BF relationships for Japanese and Australians were proposed: Japanese: Log %BF = -1.330 + 1.896(log BMI), (R2 = 0.547, SEE = 0.09); Australians: Log %BF = -1.522 + 2.001(log BMI), (R2 = 0.544, SEE = 0.10). Equations were also developed to predict %BF for Japanese and Australian males from body composition assessments using anthropometry and DXA: Japanese: %BF = 0.376 + 0.402(abdominal) + 0.772(medial calf) + 0.217(age), (R2 = 0.786, SEE = 2.69); Australians: %BF = 2.184 + 0.392(medial calf) + 0.678(supraspinale) + 0.467(triceps), (R2 = 0.864, SEE = 2.37). Lifestyle factors were found to influence perceptions of body image. / Australian males participate in physical activity more frequently than their Japanese counterparts (Australians = 98.6% involved in vigorous activity at least once per week, Japanese living in Japan = 85.7%, Japanese living in Australia = 72.1%). Significant differences (p<0.05) in energy contribution patterns were found between the Japanese group (Protein: 14.4%, Carbohydrate: 50.4%, Fat: 28.1%), the Japanese living in Australia (Protein: 16.3%, Carbohydrate: 47.3%, Fat: 32.3%), and the Australians (Protein: 17.1%, Carbohydrate: 47.9%, Fat: 30.6%). This shows that the Japanese living in Australia have adopted a more westernised diet than those living in Japan. Body image assessments were done on all study groups using the Somatomorphic Matrix (SM) computer program and questionnaires, including the Ben-Tovim Walker Body Attitudes Questionnaire (BAQ), the Attention to Body Shape Scale (ABS), and the Eating Attitudes Test (EAT). Japanese males tended to overestimate their weight and amount of body fat, while Australian Caucasian males underestimated these parameters. The Japanese groups had higher scores on the self-disparagement subscale and lower scores on the strengths and attractiveness subscales of the BAQ than Australian males. Australian males also had higher EAT total scores and higher scores on the dieting subscale of the EAT than Japanese males. When all groups of subjects selected their perceived body image from the SM program menu, the results had no relationship with measured body composition values, suggesting that further development of this program is needed for use in these populations.
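The quoted BMI-%BF regressions are straightforward to apply. A minimal sketch evaluating them as written, assuming base-10 logarithms (which give physiologically plausible values); the sample BMI values are invented:

```python
import math

def percent_bf_japanese(bmi):
    """log10(%BF) = -1.330 + 1.896 * log10(BMI), as quoted in the abstract."""
    return 10 ** (-1.330 + 1.896 * math.log10(bmi))

def percent_bf_australian(bmi):
    """log10(%BF) = -1.522 + 2.001 * log10(BMI), as quoted in the abstract."""
    return 10 ** (-1.522 + 2.001 * math.log10(bmi))

for bmi in (23.6, 25.0, 28.6, 30.0):  # illustrative BMI values
    print(f"BMI {bmi:4.1f}: Japanese {percent_bf_japanese(bmi):4.1f}%  "
          f"Australian {percent_bf_australian(bmi):4.1f}%")
```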
247

Effective web crawlers

Ali, Halil, hali@cs.rmit.edu.au January 2008 (has links)
Web crawlers are the component of a search engine that must traverse the Web, gathering documents in a local repository for indexing by a search engine so that they can be ranked by their relevance to user queries. Whenever data is replicated in an autonomously updated environment, there are issues with maintaining up-to-date copies of documents. When documents are retrieved by a crawler and have subsequently been altered on the Web, the effect is an inconsistency in user search results. While the impact depends on the type and volume of change, many existing algorithms do not take the degree of change into consideration, instead using simple measures that consider any change as significant. Furthermore, many crawler evaluation metrics do not consider index freshness or the amount of impact that crawling algorithms have on user results. Most of the existing work makes assumptions about the change rate of documents on the Web, or relies on the availability of a long history of change. Our work investigates approaches to improving index consistency: detecting meaningful change, measuring the impact of a crawl on collection freshness from a user perspective, developing a framework for evaluating crawler performance, determining the effectiveness of stateless crawl ordering schemes, and proposing and evaluating the effectiveness of a dynamic crawl approach. Our work is concerned specifically with cases where there is little or no past change statistics with which predictions can be made. Our work analyses different measures of change and introduces a novel approach to measuring the impact of recrawl schemes on search engine users. Our schemes detect important changes that affect user results. Other well-known and widely used schemes have to retrieve around twice the data to achieve the same effectiveness as our schemes. Furthermore, while many studies have assumed that the Web changes according to a model, our experimental results are based on real web documents. We analyse various stateless crawl ordering schemes that have no past change statistics with which to predict which documents will change, none of which, to our knowledge, has been tested to determine effectiveness in crawling changed documents. We empirically show that the effectiveness of these schemes depends on the topology and dynamics of the domain crawled and that no one static crawl ordering scheme can effectively maintain freshness, motivating our work on dynamic approaches. We present our novel approach to maintaining freshness, which uses the anchor text linking documents to determine the likelihood of a document changing, based on statistics gathered during the current crawl. We show that this scheme is highly effective when combined with existing stateless schemes. When we combine our scheme with PageRank, our approach allows the crawler to improve both freshness and quality of a collection. Our scheme improves freshness regardless of which stateless scheme it is used in conjunction with, since it uses both positive and negative reinforcement to determine which document to retrieve. Finally, we present the design and implementation of Lara, our own distributed crawler, which we used to develop our testbed.
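The dynamic, anchor-text-driven scheme described above can be sketched as term statistics updated with positive and negative reinforcement during the current crawl, then used to score frontier URLs. This is an illustrative reconstruction, not the thesis's actual algorithm; all names are invented:

```python
from collections import defaultdict

changed = defaultdict(int)    # times a page reached via this anchor term had changed
unchanged = defaultdict(int)  # times it had not

def record_fetch(anchor_terms, page_changed):
    """Positive/negative reinforcement for each incoming anchor-text term."""
    for term in anchor_terms:
        (changed if page_changed else unchanged)[term] += 1

def change_score(anchor_terms):
    """Estimated likelihood that a frontier URL's page has changed, averaged
    over its incoming anchor-text terms (Laplace-smoothed)."""
    if not anchor_terms:
        return 0.5
    probs = [(changed[t] + 1) / (changed[t] + unchanged[t] + 2) for t in anchor_terms]
    return sum(probs) / len(probs)

record_fetch(["breaking", "news"], page_changed=True)
record_fetch(["about", "contact"], page_changed=False)
print(change_score(["news", "today"]))  # favours change-prone anchor vocabulary
```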
248

On Stock Index Volatility With Respect to Capitalization

Pachentseva, Marina, Bronskaya, Anna January 2007 (has links)
Confidence in the future is a significant factor for business development. However, market conditions are uncertain, so appropriate allowance must be made for the instability that is inherent in dynamic development. Volatility, variance and standard deviation are used to characterize the deviation of the investigated quantity from its mean value, and volatility is one of the main instruments for measuring the risk of an asset. The increasing availability of financial market data has enlarged the potential of volatility research but has also encouraged research into longer-horizon volatility forecasts. In this paper we investigate stock index volatility with respect to capitalization with the help of GARCH modelling. Three indexes of the OMX Nordic Exchange were chosen for our research: the Nordic list segment indexes comprising Nordic Large Cap, Mid Cap and Small Cap, which are based on the three market capitalization groups. We implement GARCH modelling for the chosen indexes and compare our results in order to conclude which of the indexes is more volatile. The OMX Nordic list index is quite new (2002) and was reorganized as late as October 2006; its current value is about 300 and no options on it exist. In the current work we are also interested in estimating the Heston model (SV model), which is popular in the financial world and could be used for option pricing in the future. The results of our investigation show that the Large Cap Index is more volatile than the Mid and Small Cap Indexes.
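GARCH(1,1) models the conditional variance as sigma^2_t = omega + alpha * eps^2_{t-1} + beta * sigma^2_{t-1}. A minimal sketch of fitting such a model to one index's returns with the `arch` package; the file and column names are assumptions:

```python
import pandas as pd
from arch import arch_model

# Daily closing prices for one capitalization-segment index (assumed file layout).
prices = pd.read_csv("nordic_large_cap.csv", index_col=0, parse_dates=True)["close"]
returns = 100 * prices.pct_change().dropna()  # percent returns, as arch expects

model = arch_model(returns, vol="GARCH", p=1, q=1)  # GARCH(1,1)
res = model.fit(disp="off")
print(res.summary())

# Annualized unconditional volatility implied by the fitted parameters.
omega = res.params["omega"]
alpha, beta = res.params["alpha[1]"], res.params["beta[1]"]
print((omega / (1 - alpha - beta)) ** 0.5 * 252 ** 0.5, "% per year")
```

Fitting the same model to each of the Large, Mid and Small Cap series and comparing the fitted persistence and unconditional volatility is one way to rank the indexes by volatility.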
249

How reliable is implied volatility? A comparison between implied and actual volatility on an index at the Nordic Market

Kozyreva, Maria January 2007 (has links)
Volatility forecasting plays a central role in the financial decision-making process. An intrinsic purpose of any investor is profit earning, and for that purpose investors need to estimate risk; one of the most efficient methods to this end is volatility estimation. In this thesis I compare the CBOE Volatility Index (VIX) with the actual volatility of an index on the Nordic Market. The actual volatility is defined as the one-day-ahead prediction calculated using the GARCH(1,1) model. Using the VIX model, I performed consecutive predictions 30 days ahead between February 2nd, 2007 and March 6th, 2007. These predictions were compared with the GARCH(1,1) one-day-ahead predictions for the same period. To my knowledge, such comparisons have not been performed earlier on the Nordic Market. The conclusion of the study is that the VIX predictions tend toward higher values than the GARCH(1,1) predictions except for large upward price jumps, which indicates that the VIX is not able to predict future shocks. Apart from these jumps, the VIX more often shows a larger value than the GARCH(1,1); this is interpreted as an uncertainty in the prediction. However, the VIX predictions follow the actual volatility reasonably well. I conclude that the VIX estimate can be used as a reliable estimator of market volatility.
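The consecutive one-day-ahead predictions described above correspond to re-fitting a GARCH(1,1) model on an expanding window and forecasting one step each day. A minimal sketch with the `arch` package (file name, column and 30-day window are assumptions):

```python
import pandas as pd
from arch import arch_model

prices = pd.read_csv("nordic_index.csv", index_col=0, parse_dates=True)["close"]
returns = 100 * prices.pct_change().dropna()

# One-day-ahead volatility forecast for each of the last 30 trading days.
forecasts = []
for t in range(len(returns) - 30, len(returns)):
    res = arch_model(returns.iloc[:t], vol="GARCH", p=1, q=1).fit(disp="off")
    fc = res.forecast(horizon=1)
    forecasts.append(fc.variance.iloc[-1, 0] ** 0.5)  # daily volatility, percent

# These values would then be compared day by day with the implied-volatility series.
```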
250

Futures-Spot Arbitrage of Stock Index Futures in China: Empirical Study on Arbitrage Strategy

PENG, XUE, FANG, YU January 2010 (has links)
The main purpose of this thesis is to investigate the optimal futures-spot arbitrage strategy for investment in China's stock index futures. Specifically, the index replication method and the no-arbitrage pricing model are examined. We compare different combinations of ETF portfolios in mainland China with the W.I.S.E-CSI 300 ETF in Hong Kong in three respects: liquidity level, correlation of the ETFs with the underlying index, and tracking error of the replication methods. We then add several new parameters to the interval pricing model to obtain a more accurate no-arbitrage band. We found that a portfolio of the SSE 50 ETF, SZSE 100 ETF, and SSE Bonus ETF provides the best tracking of the CSI 300 Index, with weights of 0.369, 0.403, and 0.19 respectively. Furthermore, the modified pricing model can find more arbitrage opportunities than the interval pricing model, especially for reverse cash-and-carry arbitrage. On the whole, the optimal arbitrage strategy for investment in CSI 300 Index futures consists of two steps: implement an ETF portfolio that replicates the CSI 300 Index, and use the modified pricing model to discover and define arbitrage opportunities before applying futures-spot arbitrage. At the end of the thesis we also give a small case study to illustrate how to exercise the arbitrage strategy in a realistic situation.
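The no-arbitrage band referred to above is conventionally built around the cost-of-carry fair value and widened by transaction costs: futures prices inside the band admit no profitable arbitrage. A minimal sketch under standard assumptions (continuous compounding; the cost parameters are invented, not the thesis's calibrated values):

```python
import math

def no_arbitrage_band(spot, r, d, tau, cost_long, cost_short):
    """Cost-of-carry no-arbitrage interval for an index futures price.

    spot: level of the replicating ETF portfolio; r: risk-free rate;
    d: dividend yield; tau: time to delivery in years;
    cost_*: round-trip frictions as fractions of the fair value.
    """
    fair = spot * math.exp((r - d) * tau)  # cost-of-carry fair value
    upper = fair * (1 + cost_long)   # above this: cash-and-carry arbitrage
    lower = fair * (1 - cost_short)  # below this: reverse cash-and-carry
    return lower, upper

lo, hi = no_arbitrage_band(spot=3000.0, r=0.03, d=0.02, tau=0.25,
                           cost_long=0.012, cost_short=0.018)
print(f"no-arbitrage interval: [{lo:.1f}, {hi:.1f}]")
```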
