891

Sensor and model integration for the rapid prediction of concurrent flow flame spread

Cowlard, Adam January 2009 (has links)
Fire Safety Engineering is required at every stage in the life cycle of modern-day buildings. Fire safety design, detection and suppression, and emergency response are all vital components of Structural Fire Safety but are usually perceived as independent issues. Sensor deployment and exploitation is now commonplace in modern buildings for purposes such as temperature, air quality and security management. Despite the potential wealth of information these sensors could afford firefighters, the design of sensor networks within buildings is entirely detached from the procedures associated with emergency management. The experiences of Dalmarnock Fire Test Two showed that streams of raw data emerging from sensors lead to rapid information overload and do little to improve understanding of the complex phenomenon or of likely future events during a real fire. Even though current sensor technology in other fields is far more advanced than that used for fire, there is no justification for more complex and expensive sensors in this context. In isolation, therefore, sensors are not sufficient to aid emergency response. Fire modelling follows a similar path. Two studies of Dalmarnock Fire Test One demonstrate clearly the current state of the art of fire modelling. A priori studies by Rein et al. (2009) showed that blind prediction of the evolution of a compartment fire is currently beyond the state of the art of fire modelling practice. A posteriori studies by Jahn et al. (2007) demonstrated that even with the provision of large quantities of sensor data, video footage and prior knowledge of the fire, producing a CFD reconstruction was an incredibly difficult, laborious, intuitive and repetitive task. Firefighting is therefore left as an isolated activity that benefits neither from sensor data nor from the potential of modelling the event. In isolation, sensors and fire modelling are found lacking. Together, though, they appear to form the perfect complement. Sensors provide a plethora of information that lacks interpretation. Models provide a method of interpretation but lack the necessary information to make their output robust. Thus a mechanism is proposed to achieve accurate, timely predictions by means of theoretical models steered by continuous calibration against sensor measurements. Issues of accuracy aside, these models demand heavy resources and computational times far greater than the duration of the processes being simulated. To be of use to emergency responders, the output would need to be produced faster than the event itself, with enough lead time to enable planning of an intervention strategy. Therefore, in isolation, model output is neither robust nor fast enough to be implemented in an emergency response scenario. The concept of super-real-time predictions steered by measurements is studied in the simple yet meaningful scenario of concurrent flow flame spread. Experiments have been conducted with PMMA slabs to feed sensor data into a simple analytical model. Numerous sensing techniques have been adapted to feed a simple algebraic expression from the literature linking flame spread, flame characteristics and pyrolysis evolution in order to model upward flame spread. The measurements are continuously fed to the computations so that projections of the flame spread velocity and flame characteristics can be established at each instant in time, ahead of the real flame. It was observed that, as the input parameters in the analytical model were optimised to the scenario, rapid convergence between the evolving experiment and the predictions was attained.
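
As an illustration of the sensor-steered forecasting idea described above, the sketch below repeatedly re-fits a simple upward-spread relation to streaming pyrolysis-front measurements and integrates it ahead of the flame. The functional form (flame height proportional to a power of the pyrolysis length, spread rate proportional to their difference over a time constant), the parameter names and all numbers are illustrative assumptions, not the thesis's exact formulation.

```python
# Hedged sketch: continuously calibrate a simple upward flame-spread model
# against streaming pyrolysis-front measurements, then project it forward.
# Assumed model (illustrative, not the thesis's): dxp/dt = (a*xp**n - xp)/tau.
import numpy as np
from scipy.optimize import curve_fit


def spread_model(t, a, n, tau, xp0):
    """Integrate dxp/dt = (a * xp**n - xp) / tau forward from xp0 (explicit Euler)."""
    xp = np.empty_like(t, dtype=float)
    xp[0] = xp0
    for i in range(1, len(t)):
        dt = t[i] - t[i - 1]
        step = (a * xp[i - 1] ** n - xp[i - 1]) / tau
        xp[i] = max(xp[i - 1] + dt * step, 1e-6)  # keep the front position positive
    return xp


def forecast_ahead(t_obs, xp_obs, horizon, dt=1.0):
    """Fit (a, n, tau) to the observations so far, then project the front ahead."""
    fit_fun = lambda t, a, n, tau: spread_model(t, a, n, tau, xp_obs[0])
    popt, _ = curve_fit(fit_fun, t_obs, xp_obs, p0=[2.0, 0.8, 30.0],
                        bounds=([0.1, 0.1, 1.0], [10.0, 1.5, 300.0]))
    t_future = np.arange(t_obs[-1], t_obs[-1] + horizon + dt, dt)
    xp_future = spread_model(t_future, *popt, xp_obs[-1])
    return popt, t_future, xp_future


if __name__ == "__main__":
    # Synthetic "sensor" stream of pyrolysis-front positions (cm versus s).
    rng = np.random.default_rng(0)
    t = np.arange(0.0, 120.0, 2.0)
    truth = spread_model(t, 2.5, 0.75, 25.0, 1.0)
    noisy = truth + rng.normal(0.0, 0.05, size=t.size)

    # Re-fit every time a new batch of measurements arrives (the super-real-time loop).
    for n_obs in (20, 40, 60):
        params, t_f, xp_f = forecast_ahead(t[:n_obs], noisy[:n_obs], horizon=60.0)
        print(f"after {t[n_obs - 1]:5.0f} s: a={params[0]:.2f}, n={params[1]:.2f}, "
              f"tau={params[2]:.1f} s, predicted front in 60 s: {xp_f[-1]:.1f} cm")
```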
892

Forecasting Brazilian inflation with singular spectrum analysis

Matsuoka, Danilo Hiroshi January 2016 (has links)
O objetivo deste artigo é avaliar previsões da inflação brasileira a partir do método não-paramétrico de Análise Espectral Singular (SSA). O exercício de previsão utiliza o esquema de janelas rolantes. Diferentes estratégias de combinação de previsões e procedimentos de seleção de variáveis para métodos multivariados foram contempladas. Para robustez, cinco horizontes de previsão foram utilizados. A avaliação das previsões considera diversos procedimentos e medidas estatísticas para oferecer conclusões confiáveis, incluindo razões de erro quadrático médio de previsão, teste de igualdade condicional de habilidade preditiva, diferenças de erro quadrático médio de previsão cumulativas e Model Confidence Set. Os resultados mostram que o SSA supera consistentemente os métodos competidores. Quase todas as previsões SSA superam os competidores em termos de erro quadrático médio de previsão, e em vários casos, com significância estatística. A análise da porção fora da amostra indica superioridade em performance relativa do SSA, especialmente no período de choque nos preços de energia elétrica. Adicionalmente, métodos SSA sempre foram incluídos no conjunto superior do Model Confidence Set. A falta de estudos relacionados com previsão da inflação brasileira e a relativa escassez de análises de previsões via métodos não-paramétricos ressaltam a relevância deste artigo. Não existem pesquisas na literatura de previsão de inflação brasileira aplicando SSA. Uma das estratégias de combinação de previsões aplicadas neste artigo não é comumente encontrada na literatura, na medida em que envolve combinações de diferentes especificações para cada método de previsão. Adicionalmente, restrições de parâmetros foram impostas nas previsões SSA, uma prática não reportada na literatura.

The purpose of this paper is to evaluate Brazilian inflation forecasts produced by the nonparametric method of Singular Spectrum Analysis (SSA). The forecasting exercise employs a rolling-window scheme. Different forecast combination strategies and variable selection procedures for multivariate methods were contemplated. For robustness, five forecast horizons were used. The forecast evaluation considers several statistical measures and procedures to offer reliable conclusions, including mean squared forecast error ratios, tests of equal conditional predictive ability, cumulative squared forecast error differences and the Model Confidence Set. The results show that SSA consistently outperforms the competing methods. Almost all SSA forecasts outperform the competitors in the mean squared forecast error sense, in several cases with statistical significance. Analysis of the out-of-sample portion indicates relatively superior performance of SSA, especially over the period of the electricity price shock. SSA methods were always included in the superior set of the Model Confidence Set procedure. The lack of studies related to Brazilian inflation forecasting and the relative scarcity of forecast analyses based on nonparametric methods highlight the relevance of this paper. There is no research in the Brazilian inflation forecasting literature applying SSA. One of the forecast combination strategies applied in this paper is not commonly found in the literature, as it involves combinations of different specifications for each forecasting method. Additionally, parameter restrictions were imposed on the SSA forecasts, a practice not reported in the literature.
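
For readers unfamiliar with SSA, the minimal sketch below shows univariate SSA with recurrent forecasting of the kind evaluated in the paper: embed the series in a trajectory matrix, truncate its SVD, reconstruct by diagonal averaging, and forecast with the induced linear recurrence. The window length, number of components and toy series are illustrative assumptions; the paper's multivariate variants, forecast combinations and parameter restrictions are not reproduced.

```python
# Minimal univariate SSA with recurrent (R-) forecasting; parameters and the
# toy series are illustrative choices, not the author's specification.
import numpy as np


def ssa_forecast(x, window, n_components, steps):
    x = np.asarray(x, dtype=float)
    N, L = len(x), window
    K = N - L + 1

    # 1) Embed the series into the L x K trajectory (Hankel) matrix.
    X = np.column_stack([x[j:j + L] for j in range(K)])

    # 2) SVD and rank-r grouping.
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    r = n_components
    Xr = (U[:, :r] * s[:r]) @ Vt[:r, :]

    # 3) Diagonal averaging (Hankelisation) gives the reconstructed series.
    rec = np.zeros(N)
    counts = np.zeros(N)
    for i in range(L):
        for j in range(K):
            rec[i + j] += Xr[i, j]
            counts[i + j] += 1
    rec /= counts

    # 4) Linear recurrence coefficients from the leading left singular vectors.
    P = U[:, :r]
    pi = P[-1, :]                        # last coordinates of each vector
    nu2 = np.sum(pi ** 2)
    R = (P[:-1, :] @ pi) / (1.0 - nu2)   # length L-1, oldest lag first

    # 5) Recurrent forecast: apply the recurrence to the last L-1 values.
    series = list(rec)
    for _ in range(steps):
        series.append(float(np.dot(R, series[-(L - 1):])))
    return np.array(series[-steps:])


if __name__ == "__main__":
    # Toy monthly "inflation-like" series: trend + seasonality + noise.
    rng = np.random.default_rng(1)
    t = np.arange(180)
    y = 0.4 + 0.002 * t + 0.15 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 0.05, t.size)
    print(ssa_forecast(y, window=36, n_components=6, steps=12))
```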
893

Water Availability in a Warming World

Aminzade, Jennifer January 2011 (has links)
As climate warms during the 21st century, the resultant changes in water availability are a vital issue for society, perhaps even more important than the magnitude of warming itself. Yet our climate models disagree in their forecasts of water availability, limiting our ability to plan accordingly. This thesis investigates future water availability projections from Coupled Ocean-Atmosphere General Circulation Models (GCMs), primarily using two water availability measures: soil moisture and the Supply Demand Drought Index (SDDI). Chapter One introduces methods of measuring water availability and explores some of the fundamental differences between soil moisture, SDDI and the Palmer Drought Severity Index (PDSI). SDDI and PDSI tend to predict more severe future drought conditions than soil moisture; 21st century projections of SDDI show conditions rivaling North American historic mega-droughts. We compare multiple potential evapotranspiration (EP) methods in New York using input from the GISS Model ER GCM and local station data from Rochester, NY, and find that they compare favorably with local pan evaporation measurements. We calculate SDDI and PDSI values using various EP methods, and show that changes in future projections are largest when using EP methods most sensitive to global warming, not necessarily methods producing EP values with the largest magnitudes. Chapter Two explores the characteristics and biases of the five GCMs and their 20th and 21st century climate projections. We compare atmospheric variables that drive water availability changes globally, zonally, and geographically among models. All models show increases in both dry and wet extremes for SDDI and soil moisture, but increases are largest for extreme drying conditions using SDDI. The percentage of gridboxes that agree on the sign of change of soil moisture and SDDI between models is very low, but does increase in the 21st century. Still, differences between models are smaller than differences between SDDI and soil moisture projections. Chapter Three addresses the three major differences between SDDI and soil moisture calculations that shed light on why their future projections diverge: evaporation approximations, dependence on previous months' conditions, and the inclusion of additional variables such as runoff. We implement various changes in SDDI and a GCM vegetation scheme to test the sensitivity of each measure and to evaluate which alterations increase the similarity between SDDI and soil moisture. In addition to deconstructing the differences between SDDI and soil moisture, we analyze their projections regionally in Chapter Four. In seven regions (the southwest U.S., southern Europe, eastern China, eastern Siberia, Australia, Uruguay and Colombia), we 1) assess the forecasts of future water availability changes, 2) compare the atmospheric dynamical processes that produce rainfall and drought in the real world to the way it occurs in individual GCMs, 3) determine how these processes change as global temperatures increase, and 4) identify the most likely scenarios for future regional water availability. Chapter Five summarizes key findings by chapter, enumerating this dissertation's contributions to the field. It then discusses the limitations of existing models and measures, and suggests potential solutions for overcoming their predictive shortfalls. Finally, the chapter concludes with a proposal for future research to expand upon this dissertation work. 
This thesis highlights the global and zonal differences between two water availability measures, SDDI and soil moisture, and identifies regions where they agree and disagree in 21st century modeled scenarios. It explains why soil moisture and SDDI projections differ and shows that it is possible to bring their future projections into agreement, an approach that is also applicable to PDSI. Finally, a detailed analysis of climatic changes from five GCMs made it possible to present the most likely scenarios for 21st century water availability in seven regions.
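
As a rough illustration of the quantities compared above, the sketch below computes a temperature-based potential evapotranspiration estimate (Thornthwaite's method, one of the simpler EP methods of the kind compared in Chapter One) and a crude standardized supply-minus-demand anomaly. The actual SDDI follows Rind et al. and differs in detail, and the climatology values here are invented for illustration.

```python
# Hedged sketch: Thornthwaite potential evapotranspiration and a simple
# standardized P - EP anomaly as an SDDI-like index. Illustrative only.
import numpy as np


def thornthwaite_pet(monthly_t_celsius, day_length_hours, days_in_month):
    """Monthly PET in mm from a 12-value climatology of mean temperature."""
    T = np.clip(np.asarray(monthly_t_celsius, dtype=float), 0.0, None)
    I = np.sum((T / 5.0) ** 1.514)                      # annual heat index
    a = 6.75e-7 * I**3 - 7.71e-5 * I**2 + 1.792e-2 * I + 0.49239
    return 16.0 * (day_length_hours / 12.0) * (days_in_month / 30.0) * (10.0 * T / I) ** a


def supply_demand_index(precip_mm, pet_mm):
    """Standardized anomaly of monthly supply minus demand (illustrative)."""
    d = np.asarray(precip_mm, dtype=float) - np.asarray(pet_mm, dtype=float)
    return (d - d.mean()) / d.std(ddof=1)


if __name__ == "__main__":
    # Rough mid-latitude monthly climatology (made-up numbers).
    t_mean = [-2, 0, 5, 11, 17, 22, 25, 24, 19, 13, 7, 1]
    day_len = np.array([9.5, 10.5, 12.0, 13.5, 14.8, 15.3, 15.0, 13.9, 12.4, 10.9, 9.7, 9.1])
    days = np.array([31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31])
    precip = [80, 70, 90, 95, 100, 95, 105, 100, 95, 90, 85, 80]

    pet = thornthwaite_pet(t_mean, day_len, days)
    print(np.round(supply_demand_index(precip, pet), 2))
```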
894

Simultaneous prediction intervals for multiple steps ahead forecasts in vector time series.

January 2007 (has links)
Yick, Kwok Leung.
Thesis (M.Phil.)--Chinese University of Hong Kong, 2007.
Includes bibliographical references (leaves 67-68).
Abstracts in English and Chinese.

Contents:
Chapter 1: Introduction (p. 1)
  1.1 The importance of forecasting (p. 1)
  1.2 Objective (p. 3)
Chapter 2: Vector Autoregressive Model (p. 5)
  2.1 The VAR(p) model (p. 5)
  2.2 Least squares estimation method (p. 7)
  2.3 VAR order selection method (p. 10)
  2.4 Constructing simultaneous prediction intervals procedures (p. 11)
    2.4.1 Bonferroni procedure (p. 12)
    2.4.2 The 'Exact' procedure (p. 13)
    2.4.3 Two variables case (p. 15)
    2.4.4 Three variables case (p. 18)
Chapter 3: A System of Linear Equations with Exogenous Variables (p. 23)
  3.1 Restriction of VAR model (p. 23)
  3.2 Least squares estimation method (p. 24)
  3.3 Hsiao's sequential method for estimating the lag lengths (p. 26)
    3.3.1 Two variables case (p. 27)
    3.3.2 Three variables case (p. 29)
  3.4 Using VAR model to construct simultaneous prediction intervals (p. 32)
    3.4.1 Bonferroni procedure (p. 34)
    3.4.2 The 'Exact' procedure (p. 35)
    3.4.3 Two variables case (p. 36)
    3.4.4 Three variables case (p. 38)
Chapter 4: Illustrative Examples (p. 42)
Chapter 5: A Simulation Study (p. 52)
  5.1 Design of the experiment (p. 52)
  5.2 Simulation results (p. 58)
  5.3 Concluding remarks (p. 60)
  5.4 Further research (p. 60)
References (p. 67)
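
The Bonferroni procedure listed in the contents can be sketched as follows with statsmodels: fit a VAR, then request pointwise prediction intervals at level alpha divided by the number of series times the number of lead times, so that the joint coverage is at least the nominal level. The 'Exact' procedure and the exogenous-variable system of Chapter 3 are not reproduced here, and the data are simulated for illustration.

```python
# Sketch of Bonferroni simultaneous prediction intervals for a VAR forecast.
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(42)

# Simulate a bivariate VAR(1): y_t = A y_{t-1} + e_t.
A = np.array([[0.5, 0.1],
              [0.2, 0.4]])
y = np.zeros((300, 2))
for t in range(1, 300):
    y[t] = A @ y[t - 1] + rng.normal(0, 1, 2)
data = pd.DataFrame(y, columns=["y1", "y2"])

res = VAR(data).fit(maxlags=4, ic="aic")

k = data.shape[1]          # number of series
h = 6                      # forecast lead times
alpha_joint = 0.05
# Bonferroni: split the overall level over all k*h individual intervals so the
# joint coverage is at least 1 - alpha_joint.
alpha_point = alpha_joint / (k * h)

last_obs = data.values[-res.k_ar:]
point, lower, upper = res.forecast_interval(last_obs, steps=h, alpha=alpha_point)

for step in range(h):
    bounds = ", ".join(f"{data.columns[i]} in [{lower[step, i]:.2f}, {upper[step, i]:.2f}]"
                       for i in range(k))
    print(f"h={step + 1}: {bounds}")
```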
895

The application of remotely sensed inner-core rainfall and surface latent heat flux in typhoon intensity forecast. / CUHK electronic theses & dissertations collection

January 2010 (has links)
A logistic regression model (LRRI) and a neural network model (NNRI) for RI forecasting of TCs are developed for the period 2000-2007. The five significant predictors are intensity change in the previous 12 h, intensification potential, lower-level relative humidity, eddy flux convergence at 200 hPa, and vertical wind shear. Verification of forecasts in the 2008 typhoon season shows that NNRI outperforms LRRI for RI detection.

Despite improvements in statistical and dynamic models in recent years, the prediction of tropical cyclone (TC) intensity still lags that of track forecasting. Recent advances in satellite remote sensing coupled with artificial intelligence techniques offer an opportunity to improve the forecasting skill of typhoon intensity.

In this study rapid intensification (RI) of TCs is defined as an over-water minimum central pressure fall in excess of 20 hPa over a 24-h period. Composite analysis shows that satellite-based surface latent heat flux (SLHF) and inner-core rain rate (IRR) are related to rapidly intensifying TCs over the western North Pacific, suggesting SLHF and IRR have the potential to add value to TC intensity forecasting.

Several linear regression models and neural network models are developed for the intensity prediction of western North Pacific TCs at 24-h, 48-h, and 72-h intervals. The datasets include Japan Meteorological Agency (JMA) Regional Specialized Meteorological Center Tokyo (RSMC Tokyo) best track data, the National Centers for Environmental Prediction (NCEP) Global Forecasting System Final analysis, the Tropical Rainfall Measuring Mission (TRMM) Microwave Imager sea surface temperature (SST), the Objectively Analyzed Air-sea Fluxes (OAflux) SLHF and TRMM Multisatellite Precipitation Analysis (TMPA) rain rate data. The models include climatology and persistence (CLIPER), a model based on the Statistical Typhoon Intensity Prediction System (STIPS), which serves as the BASE model, and a model of STIPS with additional satellite estimates of IRR and SLHF (STIPER). A revised equation for TC maximum potential intensity (MPI) is derived using TMI Optimally Interpolated Sea Surface Temperature (OISST) data with higher temporal and spatial resolutions. Analysis of the resulting models indicates that the STIPER model reduces the mean absolute intensity forecast error by 6% for TC intensity forecasts out to 72 h compared to CLIPER and BASE. Neural network models with the same predictors as STIPER can provide up to 28% error reduction compared to STIPER. The largest improvement is in the intensity forecasts of rapidly intensifying and rapidly decaying TCs.

Gao, Si.
Adviser: Long Song Willie Chiu.
Source: Dissertation Abstracts International, Volume: 73-01, Section: B, page: .
Thesis (Ph.D.)--Chinese University of Hong Kong, 2010.
Includes bibliographical references (leaves 94-105).
Abstract also in Chinese.
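
A minimal sketch of the two RI classifiers described above (LRRI and NNRI) follows, using the five named predictors with scikit-learn. The feature values and labels are randomly generated placeholders, not the study's data, and the model settings are illustrative.

```python
# Hedged sketch of LRRI-style and NNRI-style rapid-intensification classifiers.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(7)
n = 1000
X = pd.DataFrame({
    "dVmax_prev12h": rng.normal(0, 10, n),      # intensity change, previous 12 h
    "intensification_potential": rng.normal(40, 15, n),
    "low_level_rh": rng.uniform(40, 95, n),     # lower-level relative humidity (%)
    "efc_200hpa": rng.normal(0, 5, n),          # eddy flux convergence at 200 hPa
    "vertical_wind_shear": rng.uniform(2, 25, n),
})
# Synthetic label: RI more likely with prior intensification, moist low levels
# and weak shear (sign conventions only; the coefficients are made up).
logit = (0.08 * X["dVmax_prev12h"] + 0.03 * X["intensification_potential"]
         + 0.04 * X["low_level_rh"] - 0.15 * X["vertical_wind_shear"] - 4.0)
y = (rng.uniform(size=n) < 1 / (1 + np.exp(-logit))).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

lrri = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
nnri = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0).fit(X_tr, y_tr)

print("LRRI:\n", classification_report(y_te, lrri.predict(X_te)))
print("NNRI:\n", classification_report(y_te, nnri.predict(X_te)))
```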
896

Power Map Explorer: uma ferramenta para visualização e previsão de vazões / Power Map Explorer: a tool for the visualization and forecasting of inflows

Henderson Amparado de Oliveira Silva 24 August 2007 (has links)
A complexidade inerente ao processo de produção de energia apresenta um desafio aos especialistas quando estes se deparam com o dimensionamento e operação de sistemas de recursos hídricos. A produção energética de um sistema hidroelétrico depende fundamentalmente das séries de vazões afluentes às diversas usinas hidrelétricas do sistema. No entanto, a incerteza das vazões futuras e sua aleatoriedade são obstáculos que dificultam todo o planejamento da operação do sistema energético brasileiro. A inexistência de um software específico para análise de séries de vazões ocorridas nas usinas hidrelétricas, associada à importância desse tipo de dado no contexto energético, motivou a concepção de uma ferramenta gráfica para visualização e previsão desses dados. Acredita-se que a visualização desses dados por meio de representações apropriadas e altamente interativas possa promover hipóteses e revelar novas informações dos fenômenos associados a essas quantidades, melhorando a qualidade das decisões de planejamento do sistema energético. Este trabalho de mestrado apresenta em detalhes o sistema desenvolvido, chamado Power Map Explorer, e as técnicas nele implementadas.

The complexity inherent in the process of energy production presents a challenge to the experts who must dimension and operate water resources systems. The energy production of a hydroelectric system depends fundamentally on the streamflow time series at the hydroelectric plants located on the different rivers of the system. However, the uncertainty and randomness of future streamflows impose difficulties on the planning and operation of the Brazilian energy system. The lack of a software tool to support the analysis of inflow series from hydroelectric plants, together with the importance of this kind of data in the energy context, motivated the conception and implementation of a graphical tool to visualize and forecast this type of data. Visualizing these data through appropriate, highly interactive representations can prompt new hypotheses and reveal new information, improving the quality of energy-system planning decisions. This dissertation presents in detail the developed system, called Power Map Explorer, and the techniques implemented in it.
897

Viabilidade logística e econômica da distribuição secundária de gás natural: uma abordagem metodológica / Logistics and economic viability of secondary distribution of natural gas: a methodological approach

Abraão Ramos da Silva 04 April 2014 (has links)
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior

This work proposes a methodology for studying the feasibility of distributing natural gas to remote areas that lack access to a backbone pipeline. In recent years, natural gas has gained a strongly increasing share of the energy supply worldwide, including in Brazil. The State of Ceará, in Northeastern Brazil, currently shows a natural gas supply surplus of about four million cubic meters per day. At present, natural gas distribution in Ceará State occurs only in the Fortaleza metropolitan area. Although the State has many important urban development poles with significant potential to consume natural gas, they are not yet served by the necessary supply infrastructure, such as pipelines. This is an important problem because fuelwood is widely used in the countryside despite the damage it causes to the environment. All over the world, secondary markets have been supplied with natural gas by truck or rail as a first step before a pipeline is implemented. This work proposes and applies a methodology to assess the economic and logistical feasibility of distributing natural gas to remote regions. The methodology makes use of discrete choice demand forecasting with both revealed and stated preference data, capacitated facility location modelling, and conventional indicators of economic feasibility. A case study is discussed involving the CRAJUBAR region of Ceará State. The work contributes to identifying scenarios in which substitution of the energy input is feasible.

Esta dissertação propõe uma metodologia para estudo de viabilidade da distribuição secundária de gás natural em regiões afastadas de redes primárias de gasodutos. Diante da segurança de fornecimento do gás natural apresentada atualmente no país e no Mundo, a sua participação na matriz energética vem se intensificando nos últimos anos. O Estado do Ceará apresenta superávit na oferta equivalente a quatro milhões de metros cúbicos por dia de gás. Atualmente, a distribuição do gás natural, nesse Estado, é realizada apenas na Região Metropolitana de Fortaleza, sendo que no interior se encontram importantes polos de desenvolvimento, como a Região do CRAJUBAR, com uma base industrial com potencial de consumo de gás natural, que poderia levar à substituição do uso principalmente de lenha no processo produtivo das empresas e, também, poderia propiciar a interiorização do uso do energético em regiões ainda não atendidas por gasodutos. O atendimento aos consumidores de gás natural tem ocorrido por meio da utilização de distribuição secundária (gasoduto virtual) indutora de mercado. Assim, o objetivo deste estudo reside em propor e aplicar uma metodologia de determinação da viabilidade da distribuição secundária do gás natural para regiões não atendidas por gasodutos, instrumentada pelo uso de técnicas de previsão de demanda, de otimização de custos e de planilha eletrônica na determinação da viabilidade econômica. O trabalho busca contribuir na identificação de cenários viáveis de substituição energética para o uso do gás natural na região em estudo.
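
The capacitated facility location component of the methodology can be sketched as a small mixed-integer program, for example with PuLP as below. The candidate depots, capacities, demands and costs are invented for illustration and are not the thesis's data or model details.

```python
# Hedged sketch of a capacitated facility location model for a "virtual
# pipeline": open depots and ship gas to towns at minimum total cost.
from pulp import LpProblem, LpMinimize, LpVariable, LpBinary, lpSum, value, PULP_CBC_CMD

depots = ["D1", "D2", "D3"]                  # candidate virtual-pipeline depots
towns = ["Crato", "Juazeiro", "Barbalha"]    # demand points (CRAJUBAR region)
fixed_cost = {"D1": 900, "D2": 700, "D3": 1100}           # depot opening cost
capacity = {"D1": 60, "D2": 45, "D3": 80}                 # thousand m3/day
demand = {"Crato": 30, "Juazeiro": 50, "Barbalha": 20}    # thousand m3/day
transport = {  # cost per thousand m3 shipped from depot to town (illustrative)
    ("D1", "Crato"): 4, ("D1", "Juazeiro"): 6, ("D1", "Barbalha"): 5,
    ("D2", "Crato"): 5, ("D2", "Juazeiro"): 4, ("D2", "Barbalha"): 7,
    ("D3", "Crato"): 6, ("D3", "Juazeiro"): 3, ("D3", "Barbalha"): 4,
}

prob = LpProblem("secondary_gas_distribution", LpMinimize)
open_d = {d: LpVariable(f"open_{d}", cat=LpBinary) for d in depots}
ship = {(d, t): LpVariable(f"ship_{d}_{t}", lowBound=0) for d in depots for t in towns}

# Objective: depot opening costs plus transport costs.
prob += lpSum(fixed_cost[d] * open_d[d] for d in depots) + \
        lpSum(transport[d, t] * ship[d, t] for d in depots for t in towns)

# Each town's demand must be met; shipments only from open depots within capacity.
for t in towns:
    prob += lpSum(ship[d, t] for d in depots) == demand[t]
for d in depots:
    prob += lpSum(ship[d, t] for t in towns) <= capacity[d] * open_d[d]

prob.solve(PULP_CBC_CMD(msg=False))
print("total cost:", value(prob.objective))
for d in depots:
    if open_d[d].value() > 0.5:
        print(d, "open, shipping", {t: ship[d, t].value() for t in towns})
```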
898

Stock risk mining by news.

January 2009 (has links)
Pan, Qi.
Thesis (M.Phil.)--Chinese University of Hong Kong, 2009.
Includes bibliographical references (leaves 70-73).
Abstract also in Chinese.

Contents:
Abstract (p. i)
Acknowledgement (p. iii)
Chapter 1: Introduction (p. 1)
  1.1 Main Contributions (p. 5)
  1.2 Structure of Thesis (p. 6)
Chapter 2: Related Works (p. 7)
  2.1 Literature Review (p. 7)
    2.1.1 Existing Works on Bursty Feature Identification (p. 9)
  2.2 Classification (p. 9)
    2.2.1 Support Vector Machine (p. 9)
    2.2.2 Decision Tree and C4.5 Algorithm (p. 10)
  2.3 PageRank and HITS Algorithm (p. 10)
    2.3.1 PageRank (p. 11)
    2.3.2 HITS (p. 11)
  2.4 Efficient Market Hypothesis (p. 12)
Chapter 3: Problem Statement (p. 14)
  3.1 Volatility (p. 14)
  3.2 Financial Model (p. 15)
  3.3 Problem Statement (p. 16)
Chapter 4: Volatility vs. Trend Prediction (p. 18)
Chapter 5: Bursty Volatility Features (p. 22)
  5.1 ADFIDF Measure (p. 24)
  5.2 Bursty Volatility Features (p. 28)
  5.3 Bursty Volatility Features Selection (p. 29)
Chapter 6: Volatility Ranking (p. 32)
  6.1 Graph Construction (p. 32)
  6.2 Volatility Ranking by News (p. 35)
Chapter 7: Volatility Index for Stock Volatility (p. 37)
Chapter 8: Experiments (p. 41)
  8.1 Experiments for Volatility Index (p. 41)
    8.1.1 Effectiveness of Volatility Index (p. 42)
    8.1.2 Information from News (p. 42)
    8.1.3 Information from Market (p. 45)
    8.1.4 Correlation Value (p. 46)
    8.1.5 Bursty Feature Selection (p. 47)
  8.2 Experiments for Ranking (p. 48)
    8.2.1 Ranking Quality Comparison (p. 49)
    8.2.2 Capturing Bursty Features (p. 51)
    8.2.3 The Effectiveness of Feature Rank (p. 52)
    8.2.4 The Effectiveness of Random Walk (p. 53)
    8.2.5 Combination of VbN and GARCH (p. 54)
    8.2.6 Ranking Result Sample (p. 56)
Chapter 9: Conclusion (p. 58)
Appendix A: Most Important Features for Stocks (p. 60)
Appendix B: Correlation Matrix of Stocks (p. 63)
Appendix C: News Index Evaluation Result Table (p. 65)
Appendix D: Stock Data in Experiments (p. 67)
Appendix E: Constructed Graph (p. 68)
Bibliography (p. 70)
899

Electric Power Distribution Systems: Optimal Forecasting of Supply-Demand Performance and Assessment of Technoeconomic Tariff Profile

Unknown Date (has links)
This study is concerned with the analysis of modern electric power grids designed to support large supply-demand considerations in the metro areas of large cities. Methods are proposed to determine the optimal performance of the associated distribution networks vis-à-vis power availability from multiple resources (such as hydroelectric, thermal, windmill, solar-cell etc.) and the varying load demands posed by distinct sets of consumers in the domestic, industrial and commercial sectors. The analytics developed on optimal power distribution across the pertinent power grids are verified against the proposed models. Forecast algorithms and computational outcomes on supply-demand performance are indicated and illustratively explained using real-world data sets. The study duly takes into consideration both the deterministic (technological) factors and the stochastic variables associated with the available resource capacity and demand-profile details. Toward the forecasting exercise above, a representative load curve (RLC) is defined; it is optimally determined with an Artificial Neural Network (ANN) method using the available data on the supply-demand characteristics of a practical power grid. This RLC is subsequently considered as an input parametric profile for tariff policies associated with the cost of the electric power product. The research further focuses on developing an optimal/suboptimal electric power distribution scheme across power grids deployed between multiple resources and different sets of user demands. Again, the optimal/suboptimal decisions are enabled using ANN-based simulations performed on load-sharing details. The underlying supply-demand forecasting of the distribution service profile is essential to support predictive designs on the amount of power required (or to be generated from single and/or multiple resources) versus the distributable shares to different consumers demanding distinct loads. Another topic addressed is a business model of a cost-reflective tariff levied in an electric power service in terms of the associated hedonic heuristics of customers versus the service products offered by the utility operators; this model is based on hedonic considerations and the technoeconomic heuristics of incumbent systems. In the ANN simulations above, a bootstrapping technique is adopted to generate pseudo-replicates of the available data set, which are used to train the ANN toward convergence. A traditional multilayer ANN architecture (implemented with feed-forward and backpropagation techniques) is designed and modified to support a fast-convergence algorithm used for forecasting and in load-sharing computations. The underlying simulations are carried out using case-study details on electric utilities gathered from the literature. In all, the major thematic efforts addressed in this research study are: ANN-based prediction of a representative load curve to assess power consumption and tariff details in electrical power systems supporting a smart grid; analysis of load sharing and distribution of electric power on smart grids using an ANN; and evaluation of electric power system infrastructure in terms of tariff worthiness deduced via hedonic heuristics.
Includes bibliography.
Dissertation (Ph.D.)--Florida Atlantic University, 2019.
FAU Electronic Theses and Dissertations Collection
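
A minimal sketch of the bootstrap-trained feed-forward ANN described above follows: pseudo-replicates of a synthetic hourly load history are drawn, a small MLP ensemble is trained on them, and the ensemble-mean prediction over one day serves as a representative load curve. The data, features and network size are illustrative assumptions, not the dissertation's configuration.

```python
# Hedged sketch: bootstrap pseudo-replicates + feed-forward ANN ensemble for
# a representative load curve, on synthetic data.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.utils import resample

rng = np.random.default_rng(3)

# Synthetic hourly demand: daily cycle + weekly cycle + noise (MW).
hours = np.arange(24 * 7 * 8)
load = (500 + 120 * np.sin(2 * np.pi * hours / 24 - 1.2)
        + 40 * np.sin(2 * np.pi * hours / (24 * 7)) + rng.normal(0, 15, hours.size))

# Features: hour-of-day and hour-of-week encoded cyclically.
def features(h):
    return np.column_stack([
        np.sin(2 * np.pi * (h % 24) / 24), np.cos(2 * np.pi * (h % 24) / 24),
        np.sin(2 * np.pi * (h % (24 * 7)) / (24 * 7)),
        np.cos(2 * np.pi * (h % (24 * 7)) / (24 * 7)),
    ])

X = features(hours)

# Train a small ensemble on bootstrap pseudo-replicates and average them.
ensemble = []
for seed in range(10):
    Xb, yb = resample(X, load, random_state=seed)
    net = MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=2000, random_state=seed)
    ensemble.append(net.fit(Xb, yb))

# Representative load curve: the ensemble-mean prediction over one day.
rlc = np.mean([m.predict(features(np.arange(24))) for m in ensemble], axis=0)
print(np.round(rlc, 1))
```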
900

Which version of the equity market timing affects capital structure, perceived mispricing or adverse selection?

Chazi, Abdelaziz 08 1900 (has links)
Baker and Wurgler (2002) define a new theory of capital structure. In this theory, capital structure evolves as the cumulative outcome of past attempts to time the equity market. Baker and Wurgler extend market timing theory to long-term capital structure, but their results do not clearly distinguish between the two versions of market timing: perceived mispricing and adverse selection. The main purpose of this dissertation is to empirically identify the relative importance of these two explanations. First, I retest Baker and Wurgler's theory by using insider trading as an alternative to the market-to-book ratio to measure equity market timing. I also formally test the adverse selection model of equity market timing: first by using post-issuance performance, and then by using three measures of adverse selection. The first two measures use estimates of adverse information costs based on bid and ask prices, and the third measure is based on close-to-offer returns. Based on received theory, a dynamic adverse selection model implies that higher adverse information costs lead to higher leverage. On the other hand, a naïve adverse selection model implies that negative inside information leads to lower leverage. The results are consistent with the equity market timing theory of capital structure. They also indicate that a naïve, as opposed to a dynamic, adverse selection model seems to be the best explanation of why managers time equity issues.
