11

Regional Flood Frequency Analysis For Ceyhan Basin

Sahin, Mehmet Altug 01 January 2013 (has links) (PDF)
Regional flood frequency techniques are commonly used to estimate flood quantiles when flood data are unavailable or the record length at an individual gauging station is insufficient for reliable analyses. These methods compensate for limited or unavailable data by pooling data from nearby gauged sites. This requires the delineation of hydrologically homogeneous regions in which the flood regime is sufficiently similar to allow the spatial transfer of information. Therefore, several Regional Flood Frequency Analysis (RFFA) methods are applied to the Ceyhan Basin. The Dalrymple (1960) method is applied as the RFFA method commonly used in Turkey. Multivariate statistical techniques, namely stepwise and nonlinear regression analysis, are also applied to the flood statistics and basin characteristics of the gauging stations. Rainfall, perimeter, length of main river, circularity, relative relief, basin relief, Hmax, Hmin, Hmean and HΔ are the additional basin characteristics used. Before the analysis, the stations are clustered according to their basin characteristics using a combination of Ward's and k-means clustering techniques. At the end of the study, the results are compared in terms of root mean squared error (RMSE), the Nash-Sutcliffe efficiency index and the percentage difference of the results. Using additional basin characteristics and analysing them with multivariate statistical techniques yields more accurate results than the Dalrymple (1960) method in the Ceyhan Basin, and clustered-region data give more accurate results than non-clustered-region data. Comparing the Q100/Q2.33 reduced-variate values, the whole (non-clustered) region gives 3.53, cluster-2 gives 3.43 and cluster-3 gives 3.65, showing that clustering has a positive effect on the results. Nonlinear regression analysis with three clusters gives the smallest errors, an RMSE of 29.54 and a Nash-Sutcliffe index of 0.735, compared with the other methods in the Ceyhan Basin.
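For reference, the two goodness-of-fit measures used above to compare the regional models have standard definitions; a sketch of their usual forms (with Q_i the observed value, Q̂_i the model estimate and Q̄ the observed mean at the n stations, notation assumed here rather than taken from the thesis) is:

```latex
% Root mean squared error and Nash-Sutcliffe efficiency (standard forms;
% Q_i = observed value, \hat{Q}_i = model estimate, \bar{Q} = mean of observations)
\mathrm{RMSE} = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\bigl(Q_i - \hat{Q}_i\bigr)^2},
\qquad
\mathrm{NSE} = 1 - \frac{\sum_{i=1}^{n}\bigl(Q_i - \hat{Q}_i\bigr)^2}
                        {\sum_{i=1}^{n}\bigl(Q_i - \bar{Q}\bigr)^2}
```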
12

Probability Modelling of Alpine Permafrost Distribution in Tarfala Valley, Sweden

Alm, Micael January 2017 (has links)
Field data were collected in the Tarfala valley over five days at the turn of March to April 2017. The collection yielded 36 BTS measurements (Bottom Temperature of Snow cover), which were combined with data from earlier surveys to create a model of permafrost occurrence around Tarfala. To identify meaningful predictors of permafrost, the independent variables were tested against BTS in a stepwise regression. The independent variables elevation, aspect, solar radiation, slope angle and curvature were produced for each investigated BTS point in a geographic information system. The stepwise regression selected elevation as the only significant variable, and elevation was then used in a logistic regression to model permafrost occurrence. The final model showed that the probability of permafrost increases with elevation. To distinguish between continuous, discontinuous and sporadic permafrost, the model was divided into three zones with different probability intervals. The continuous permafrost is the highest-lying zone and therefore the one where the probability of permafrost is greatest; it borders the discontinuous permafrost at 1523 m a.s.l. The discontinuous permafrost has probabilities between 50 and 80 %, and its lower limit at 1108 m a.s.l. separates the discontinuous zone from the sporadic permafrost.
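A minimal sketch of the kind of elevation-only logistic model described above, fitted with scikit-learn on illustrative data; the sample values and the idea of deriving a binary permafrost indicator from a BTS threshold are assumptions for the example, not the Tarfala measurements:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Illustrative stand-in for the field data: elevation (m a.s.l.) and a binary
# permafrost indicator derived from BTS (e.g. sufficiently cold snow base -> 1).
# These values are made up for the sketch, not the thesis data.
elevation = np.array([[950], [1010], [1080], [1150], [1230], [1310],
                      [1400], [1480], [1560], [1650]])
permafrost = np.array([0, 0, 0, 0, 1, 0, 1, 1, 1, 1])

model = LogisticRegression()
model.fit(elevation, permafrost)

# Probability of permafrost along an elevation gradient
grid = np.arange(900, 1701, 100).reshape(-1, 1)
prob = model.predict_proba(grid)[:, 1]
for z, p in zip(grid.ravel(), prob):
    print(f"{z:4d} m a.s.l.: P(permafrost) = {p:.2f}")

# Probability levels such as 50 % and 80 % can then be read off the fitted
# curve to separate sporadic, discontinuous and continuous permafrost zones,
# as in the thesis.
```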
13

Sensitivity analysis and evolutionary optimization for building design

Wang, Mengchao January 2014 (has links)
In order to achieve global carbon reduction targets, buildings must be designed to be energy efficient. Building performance simulation methods, together with sensitivity analysis and evolutionary optimization methods, can be used to generate design and performance information that helps identify energy- and cost-efficient design solutions. Sensitivity analysis is used to identify the design variables that have the greatest impacts on the design objectives and constraints. Multi-objective evolutionary optimization is used to find a Pareto set of design solutions that optimize the conflicting design objectives while satisfying the design constraints; building design is an inherently multi-objective process. For instance, there is commonly a desire to minimise both the building energy demand and capital cost while maintaining thermal comfort. Sensitivity analysis has previously been coupled with a model-based optimization in order to reduce the computational effort of running a robust optimization and to provide an insight into the solution sensitivities in the neighbourhood of each optimum solution. However, there has been little research into the extent to which the solutions found from a building design optimization can be used for a global or local sensitivity analysis, or the extent to which the local sensitivities differ from the global sensitivities. It has also been common for the sensitivity analysis to be conducted using continuous variables, whereas building optimization problems are more typically formulated using a mixture of discretised-continuous variables (with physical meaning) and categorical variables (without physical meaning). This thesis investigates three main questions: the form of global sensitivity analysis most appropriate for problems having mixed discretised-continuous and categorical variables; the extent to which samples taken from an optimization run can be used in a global sensitivity analysis, given that the optimization process biases these solutions; and the extent to which global and local sensitivities differ. The experiments conducted in this research are based on the mid-floor of a commercial office building having five zones and located in Birmingham, UK. The optimization and sensitivity analysis problems are formulated with 16 design variables, including orientation, heating and cooling setpoints, window-to-wall ratios, start and stop times, and construction types. The design objectives are the minimisation of both energy demand and capital cost, with solution infeasibility being a function of occupant thermal comfort. It is concluded that a robust global sensitivity analysis can be achieved using stepwise regression with bidirectional elimination, rank transformation of the variables and the BIC (Bayesian information criterion). It is also concluded that, when the optimization is based on a genetic algorithm, solutions taken from the start of the optimization process can be reliably used in a global sensitivity analysis, and that there is therefore no need to generate a separate set of random samples for the sensitivity analysis. The extent to which the convergence of the variables during the optimization can be used as a proxy for the variable sensitivities has also been investigated. It is concluded that it is not possible to identify the relative importance of variables through the optimization, even though the most important variable exhibited fast and stable convergence. Finally, it is concluded that differences exist in the variable rankings resulting from the global and local sensitivity methods, although the top-ranked variables from each approach tend to be the same. It is also concluded that the sensitivity of the objectives and constraints to all variables is obtainable through a local sensitivity analysis, but that a global sensitivity analysis is only likely to identify the most important variables. The repeatability of these conclusions has been investigated and confirmed by applying the methods to the example design problem with the building located in four different climates (Birmingham, UK; San Francisco, US; and Chicago, US).
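The stepwise procedure named above (bidirectional elimination on rank-transformed variables, scored by BIC) can be sketched roughly as follows; the placeholder data, column names and coefficients are assumptions for the example, not the thesis's building-simulation dataset:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

def bic_of(X, y, features):
    """Fit OLS on the given feature subset and return its BIC."""
    if features:
        design = sm.add_constant(X[list(features)])
    else:
        design = pd.DataFrame({"const": np.ones(len(y))}, index=X.index)
    return sm.OLS(y, design).fit().bic

def bidirectional_stepwise(X, y):
    """Bidirectional (forward + backward) selection minimising BIC."""
    selected, remaining = [], list(X.columns)
    best_bic = bic_of(X, y, selected)
    improved = True
    while improved:
        improved = False
        # Forward step: try adding each remaining variable.
        for f in list(remaining):
            bic = bic_of(X, y, selected + [f])
            if bic < best_bic:
                best_bic, improved = bic, True
                selected.append(f); remaining.remove(f)
        # Backward step: try dropping each selected variable.
        for f in list(selected):
            bic = bic_of(X, y, [s for s in selected if s != f])
            if bic < best_bic:
                best_bic, improved = bic, True
                selected.remove(f); remaining.append(f)
    return selected, best_bic

# Placeholder data: 16 design variables vs. an energy-demand-like objective.
rng = np.random.default_rng(0)
X = pd.DataFrame(rng.random((200, 16)),
                 columns=[f"x{i}" for i in range(16)])
y = 3 * X["x0"] - 2 * X["x5"] + rng.normal(scale=0.1, size=200)

# Rank-transform variables and response before selection, as in the thesis.
Xr, yr = X.rank(), pd.Series(y).rank()
features, bic = bidirectional_stepwise(Xr, yr)
print("selected variables:", features, "BIC:", round(bic, 1))
```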
14

Modelação e análise da vida útil (metrológica) de medidores tipo indução de energia elétrica ativa / Modelling and analysis of the (metrological) service life of induction-type active electric energy meters

Silva, Marcelo Rubia da. January 2010 (has links)
Advisor: Carlos Alberto Canesin / Committee: Júlio Borges de Souza / Committee: Denizar Cruz Martins / Abstract: The study of the operational reliability of equipment has become essential for companies to maintain proper control of their assets, both financially and in terms of safety. The study of equipment hazard rates makes it possible to foresee when failures will occur and to act preventively, but it must be carried out under established, fixed operating conditions. Electricity meters, part of the financial assets of the utilities, are used under widely varying operating conditions, both in the power flow itself, such as the presence of harmonics, undervoltages, overvoltages and distinct consumption patterns, and in the physical installation environment, such as salt air, temperature, humidity, etc. Failures in electromechanical watt-hour meters are difficult to detect, since most metering errors, caused mainly by the ageing of components, neither change the quality of the energy supplied nor interrupt its supply. In this context, this work proposes a new methodology for determining failures in electromechanical active-energy (Wh) meters. It uses the database of an electric utility and the process of knowledge discovery in databases to select the most significant variables for determining failures in electromechanical Wh meters, including in the failure set operation with metering errors above those permitted by national regulations (2010). Two data-mining techniques were used: stepwise regression and decision trees. The variables obtained were used to build a model that clusters similar meters and assigns each cluster a probability of failure. As final results, an application on a user-friendly platform was developed to apply the methodology, and a case study was carried out to demonstrate its feasibility. / Master's degree
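As an illustration of the second data-mining technique mentioned (decision trees for flagging meters likely to be out of tolerance), here is a rough scikit-learn sketch; the feature names, the failure rule and the synthetic records are assumptions for the example, not the utility's database:

```python
import numpy as np
import pandas as pd
from sklearn.tree import DecisionTreeClassifier, export_text

# Synthetic stand-in for meter records: age (years), average load (kW),
# ambient humidity (%) and a label saying whether the meter's measured
# error exceeded the regulatory limit. Feature names and values are
# illustrative assumptions, not data from the utility's database.
rng = np.random.default_rng(1)
n = 300
data = pd.DataFrame({
    "age_years": rng.integers(1, 40, n),
    "avg_load_kw": rng.uniform(0.5, 10.0, n),
    "humidity_pct": rng.uniform(30, 95, n),
})
# Older meters in humid sites fail more often in this toy example.
failure = ((data["age_years"] > 25) & (data["humidity_pct"] > 70)) | \
          (rng.random(n) < 0.05)

tree = DecisionTreeClassifier(max_depth=3, random_state=0)
tree.fit(data, failure)

# The fitted tree yields human-readable rules that can be used to rank
# meters (or clusters of meters) by likelihood of failure.
print(export_text(tree, feature_names=list(data.columns)))
```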
15

Risk factor modeling of Hedge Funds' strategies

Radosavčević, Aleksa January 2017 (has links)
This thesis aims to identify the main market risk factors driving the different strategies implemented by hedge funds, by examining correlation coefficients, applying Principal Component Analysis and analysing the loadings of the first three principal components, which explain the largest portion of the variation in hedge funds' returns. In the next step, a stepwise regression iteratively includes and excludes market risk factors for each strategy, searching for the combination of risk factors that offers the best-fitting model according to the Akaike Information Criterion (AIC) and the Bayesian Information Criterion (BIC). Lastly, to avoid spurious results and to overcome model-uncertainty issues, a Bayesian Model Averaging (BMA) approach was taken. Key words: Hedge Funds, hedge funds' strategies, market risk, principal component analysis, stepwise regression, Akaike Information Criterion, Bayesian Information Criterion, Bayesian Model Averaging Author's e-mail: aleksaradosavcevic@gmail.com Supervisor's e-mail: mp.princ@seznam.cz
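A compact sketch of the first two steps (PCA loadings, then criterion-guided stepwise inclusion of risk factors) might look like the following; the factor names and return series are randomly generated placeholders, not the hedge-fund data used in the thesis, and AIC stands in for either criterion:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from sklearn.decomposition import PCA

# Placeholder monthly returns for one strategy and a few candidate market
# risk factors (names are illustrative assumptions).
rng = np.random.default_rng(42)
factors = pd.DataFrame(rng.normal(size=(120, 4)),
                       columns=["equity", "credit", "fx_carry", "commodity"])
strategy = 0.6 * factors["equity"] + 0.3 * factors["credit"] + \
           rng.normal(scale=0.5, size=120)

# Step 1: PCA on the factor set; the loadings of the first components show
# which factors dominate the common variation.
pca = PCA(n_components=3).fit(factors)
loadings = pd.DataFrame(pca.components_.T, index=factors.columns,
                        columns=["PC1", "PC2", "PC3"])
print(loadings.round(2))

# Step 2: forward stepwise inclusion of factors, keeping a factor only if
# it lowers the model's AIC (BIC could be used the same way).
selected = []
best_aic = sm.OLS(strategy, np.ones(len(strategy))).fit().aic
for name in factors.columns:
    candidate = sm.OLS(strategy,
                       sm.add_constant(factors[selected + [name]])).fit()
    if candidate.aic < best_aic:
        selected.append(name)
        best_aic = candidate.aic
print("selected factors:", selected, "AIC:", round(best_aic, 1))
```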
16

Comparing Assessment Methods As Predictors Of Student Learning In Undergraduate Mathematics

Shorter, Nichole 01 January 2008 (has links)
This experiment was designed to determine which assessment method best predicts student learning, as measured by posttest grades, in an undergraduate mathematics course: continuous assessment (in the form of daily in-class quizzes), cumulative assessment (in the form of online homework), or project-based learning. Participants included 117 university-level undergraduate freshmen enrolled in a course titled "Mathematics for Calculus". Initially, a multiple regression model was formulated to model the relationship between the predictor variables (the continuous assessment, cumulative assessment, and project scores) and the outcome variable (the posttest scores). However, because of possible multicollinearity between the cumulative assessment and continuous assessment predictors, a stepwise regression model was implemented; based on statistical significance and hypothesis testing, the cumulative assessment predictor was forced out of the resulting model. The finalized stepwise regression model included continuous assessment scores and project scores as predictors of students' posttest scores at a 99% confidence level. Results indicated that the continuous assessment scores ultimately best predicted students' posttest scores.
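The multicollinearity concern mentioned above is commonly checked with variance inflation factors before or alongside stepwise selection; a small illustrative sketch, with made-up score columns rather than the study's data, is:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

# Made-up scores: continuous (quizzes), cumulative (online homework) and
# project assessments, plus a posttest outcome. The cumulative scores are
# built to correlate strongly with the continuous scores on purpose.
rng = np.random.default_rng(7)
n = 117
continuous = rng.uniform(50, 100, n)
cumulative = continuous + rng.normal(scale=5, size=n)   # near-collinear
project = rng.uniform(50, 100, n)
posttest = 0.5 * continuous + 0.3 * project + rng.normal(scale=8, size=n)

X = pd.DataFrame({"continuous": continuous,
                  "cumulative": cumulative,
                  "project": project})

# Variance inflation factors: values well above roughly 5-10 signal collinearity.
Xc = sm.add_constant(X)
for i, col in enumerate(X.columns, start=1):
    print(col, round(variance_inflation_factor(Xc.values, i), 1))

# Regression on the reduced predictor set, mirroring the final model.
model = sm.OLS(posttest, sm.add_constant(X[["continuous", "project"]])).fit()
print(model.summary().tables[1])
```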
17

Development of a geovisual analytics environment using parallel coordinates with applications to tropical cyclone trend analysis

Steed, Chad A 13 December 2008 (has links)
A global transformation is being fueled by unprecedented growth in the quality, quantity, and number of different parameters in environmental data through the convergence of several technological advances in data collection and modeling. Although these data hold great potential for helping us understand many complex and, in some cases, life-threatening environmental processes, our ability to generate such data is far outpacing our ability to analyze it. In particular, conventional environmental data analysis tools are inadequate for coping with the size and complexity of these data. As a result, users are forced to reduce the problem in order to adapt to the capabilities of the tools. To overcome these limitations, we must complement the power of computational methods with human knowledge, flexible thinking, imagination, and our capacity for insight by developing visual analysis tools that distill information into the actionable criteria needed for enhanced decision support. In light of said challenges, we have integrated automated statistical analysis capabilities with a highly interactive, multivariate visualization interface to produce a promising approach for visual environmental data analysis. By combining advanced interaction techniques such as dynamic axis scaling, conjunctive parallel coordinates, statistical indicators, and aerial perspective shading, we provide an enhanced variant of the classical parallel coordinates plot. Furthermore, the system facilitates statistical processes such as stepwise linear regression and correlation analysis to assist in the identification and quantification of the most significant predictors for a particular dependent variable. These capabilities are combined into a unique geovisual analytics system that is demonstrated via a pedagogical case study and three North Atlantic tropical cyclone climate studies using a systematic workflow. In addition to revealing several significant associations between environmental observations and tropical cyclone activity, this research corroborates the notion that enhanced parallel coordinates coupled with statistical analysis can be used for more effective knowledge discovery and confirmation in complex, real-world data sets.
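To illustrate the kind of multivariate view the system is built around, a bare-bones parallel coordinates plot can be drawn with pandas and matplotlib; the variable names and values below are invented placeholders, and the real system adds the dynamic axis scaling, conjunctive queries, statistical indicators and aerial perspective shading described above:

```python
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from pandas.plotting import parallel_coordinates

# Invented sample of environmental observations grouped by storm-activity
# level; column names are placeholders, not the study's variables.
rng = np.random.default_rng(3)
n = 60
df = pd.DataFrame({
    "sst": rng.normal(28, 1.0, n),          # sea surface temperature (degC)
    "shear": rng.normal(10, 3.0, n),        # vertical wind shear (m/s)
    "humidity": rng.normal(70, 8.0, n),     # relative humidity (%)
    "pressure": rng.normal(1010, 4.0, n),   # sea-level pressure (hPa)
})
df["activity"] = np.where(df["sst"] - 0.5 * df["shear"] > 23,
                          "active", "quiet")

# Normalise each axis so the parallel coordinates are comparable.
cols = ["sst", "shear", "humidity", "pressure"]
df[cols] = (df[cols] - df[cols].min()) / (df[cols].max() - df[cols].min())

parallel_coordinates(df, "activity", cols=cols, alpha=0.4)
plt.title("Parallel coordinates of environmental variables (illustrative)")
plt.show()
```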
18

A Study of the Influence Undergraduate Experiences Have on Student Performance on the Graduate Management Admission Test

Plessner, Von Roderick January 2014 (has links)
No description available.
19

The association between working capital measures and the returns of South African industrial firms

Smith, Marolee Beaumont 12 1900 (has links)
This study investigates the association between traditional and alternative working capital measures and the returns of industrial firms listed on the Johannesburg Stock Exchange. Twenty-five variables for all industrial firms listed for the most recent 10 years were derived from standardised annual balance sheet data of the University of Pretoria's Bureau of Financial Analysis. Traditional liquidity ratios measuring working capital position, activity and leverage, and alternative liquidity measures, were calculated for each of the 135 participating firms for the 10 years. These working capital measures were tested for association with five return measures for every firm over the same period. This was done by means of a chi-square test for association, followed by stepwise multiple regression undertaken to quantify the underlying structural relationships between the return measures and the working capital measures. The results of the tests indicated that the traditional working capital leverage measures, in particular total current liabilities divided by funds flow, and to a lesser extent long-term loan capital divided by net working capital, displayed the greatest associations and explained the majority of the variance in the return measures. A t-test, undertaken to analyse the size effect on the working capital measures employed by the participating firms, compared firms according to total assets. The results revealed significant differences between the means of the top quartile of firms and the bottom quartile for eight of the 13 working capital measures included in the study. A nonparametric test was applied to evaluate the sector effect on the working capital measures employed by the participating firms. The rank scores indicated significant differences in the means across the sectors for six of the 13 working capital measures. A decrease in the working capital leverage measures of current liabilities divided by funds flow, and long-term loan capital divided by net working capital, should signal an increase in returns, and vice versa. It is recommended that financial managers consider these findings when forecasting firm returns. / Business Management / D. Com. (Business Management)
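A rough sketch of the two supporting tests mentioned (a chi-square test of association between a working-capital measure and a return measure, and a t-test comparing the top and bottom total-asset quartiles) follows; the figures are simulated and the measure names are placeholders, not the Bureau of Financial Analysis data:

```python
import numpy as np
import pandas as pd
from scipy import stats

# Simulated firm-level data; names are placeholders for the study's measures.
rng = np.random.default_rng(11)
n = 135
firms = pd.DataFrame({
    "curr_liab_to_funds_flow": rng.lognormal(0.2, 0.5, n),
    "total_assets": rng.lognormal(10, 1.0, n),
})
firms["return_measure"] = -0.3 * firms["curr_liab_to_funds_flow"] + \
                          rng.normal(scale=0.5, size=n)

# Chi-square test of association: cross-tabulate quartiles of the working
# capital measure against quartiles of the return measure.
wc_q = pd.qcut(firms["curr_liab_to_funds_flow"], 4, labels=False)
ret_q = pd.qcut(firms["return_measure"], 4, labels=False)
chi2, p, dof, _ = stats.chi2_contingency(pd.crosstab(wc_q, ret_q))
print(f"chi-square = {chi2:.1f}, p = {p:.3f}, dof = {dof}")

# Size effect: t-test of the working capital measure between the largest
# and smallest firms by total assets.
size_q = pd.qcut(firms["total_assets"], 4, labels=False)
top = firms.loc[size_q == 3, "curr_liab_to_funds_flow"]
bottom = firms.loc[size_q == 0, "curr_liab_to_funds_flow"]
t, p = stats.ttest_ind(top, bottom, equal_var=False)
print(f"t = {t:.2f}, p = {p:.3f}")
```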
20

台灣地區上市公司股票評價模式之研究-以電器電纜業為例 / A study of stock valuation models for listed companies in Taiwan: the case of the electric wire and cable industry

洪美慧, Hong, Mei-Huei Unknown Date (has links)
In view of the growing attention that domestic investors pay to fundamental analysis, this study examines common stock valuation models, taking the electric wire and cable industry as its example. It first compares the gap between the values computed under various valuation theories and actual market prices, analyses the discrepancies of each valuation method, and then seeks the most appropriate valuation method for the wire and cable industry. That method is used to estimate the intrinsic value of wire and cable companies at the end of ROC year 88 (1999), which is compared with actual prices to give the investing public a reference for buying and selling. First, the factors affecting company growth identified in earlier literature are screened with correlation analysis and stepwise regression to find the main factors driving the sales growth of wire and cable companies, and a regression equation is constructed from these factors as the basis for computing each company's future growth rate. Six of the most commonly used valuation models are then examined: discounted cash flow, discounted accounting earnings, the price/earnings ratio, the price/book value ratio, the price/sales ratio and option pricing. In practice, the value of each wire and cable company is computed for ROC years 79 to 83 (1990 to 1994) and compared with its actual share price in each year, and Theil's U statistic is used to identify the best valuation model. Finally, the valuation model best suited to Taiwan's wire and cable industry, together with the growth rates obtained in the first part, is used to project the companies' financial statements for ROC years 89 to 93 (2000 to 2004) and thereby estimate their intrinsic value at the end of ROC year 88 (1999). The empirical results are as follows. From the correlation analysis of macroeconomic variables, five economic variables are adopted: the domestic economic growth rate (E1), the NT dollar/US dollar exchange rate (E2), the wholesale price index (E4), the money market interest rate (E5) and the stock price index (E6); these, together with 11 financial ratios reflecting company operating performance, are entered into the stepwise regression model. In the resulting sales model for the wire and cable industry, the variables enter in the order fixed asset turnover (C4), inventory turnover (C3), total asset turnover (C5), times interest earned (C7), the wholesale price index (E4), the debt ratio (C6) and the money market interest rate (E5). The second part of the empirical results shows that the price/book value method is the best valuation model for the wire and cable industry, followed by the price/sales method. The study therefore uses the price/book value method to compute the share value of wire and cable companies at the end of ROC year 88 (1999), as a reference for the investing public.
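For reference, Theil's U statistic used above to rank the valuation models is conventionally written as follows; this is the common U1 inequality-coefficient form, with P_i the model value and A_i the actual price over n observations, and the abstract does not spell out which variant is used:

```latex
% Theil's inequality coefficient (U1 form), comparing model values P_i
% with actual prices A_i; values closer to 0 indicate a better fit.
U = \frac{\sqrt{\tfrac{1}{n}\sum_{i=1}^{n}\bigl(P_i - A_i\bigr)^2}}
        {\sqrt{\tfrac{1}{n}\sum_{i=1}^{n}P_i^{2}} + \sqrt{\tfrac{1}{n}\sum_{i=1}^{n}A_i^{2}}}
```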
