  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
41

The relationship between information frequency and financial distress prediction

Hung, Chia-ching 20 June 2007 (has links)
This thesis studies listed electronic companies on the Taiwan Stock Exchange (TSE) and the over-the-counter (OTC) market. It has two purposes: first, to identify how failed and non-failed firms differ on financial factors and corporate governance indicators; and second, to examine whether data frequency (quarterly versus yearly reports) changes the predictive ability and significance of the variables that reveal corporate distress. Using independent-sample t-tests and logistic regression, we find that in the quarterly financial statements the profitability index is the most significant factor, followed by the debt ratio. The closer a firm is to the time of distress, the more operating-efficiency factors distinguish the two groups of firms; financially distressed firms show a higher accounts-receivable turnover rate. Among the corporate governance factors, the proportion of shares held by family members and the directors' shareholding ratio are the two most important variables. Results from the yearly financial reports are similar to those from the quarterly statements: the profitability and liquidity indices serve as leading indicators of whether a firm faces financial crisis. In the independent-sample t-tests, the cash-flow-from-operations ratio and times interest earned are significant in the first and second years before bankruptcy, and the differences in traditional financial indices and corporate governance variables between failed and normal firms are most pronounced in the year preceding failure. Among the yearly governance factors, the proportion of family members' holdings and the extent of directors' shares pledged as collateral are the most important variables. For both yearly and quarterly statements, more variables differ significantly as the time of distress approaches.
The average percentage of correctly classified firms from the 8th to the 5th quarter before distress is 80.13%, better than that obtained from the 2nd year before distress. Compared with the average accuracy from the 4th to the 1st quarter before distress, the first yearly financial statement predicts slightly better; however, the accuracy rates for the 1st and 2nd quarters before distress are 92.54% and 93.44%, an average of about 93%. In other words, using quarterly rather than yearly financial statements overcomes the reporting time lag and raises predictive ability.
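The group comparisons above rest on independent-sample t-tests. A minimal pure-Python sketch of the t statistic in Welch's form (which does not assume equal variances); the profitability figures are illustrative, not the thesis data:

```python
import math

def welch_t(a, b):
    """Welch's t statistic for two independent samples."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    va = sum((x - ma) ** 2 for x in a) / (len(a) - 1)  # sample variances
    vb = sum((x - mb) ** 2 for x in b) / (len(b) - 1)
    return (ma - mb) / math.sqrt(va / len(a) + vb / len(b))

# Hypothetical profitability ratios for distressed vs. healthy firms
distressed = [0.01, -0.03, 0.02, -0.05, 0.00]
healthy = [0.08, 0.06, 0.09, 0.07, 0.05]
t = welch_t(distressed, healthy)
```

A strongly negative t here indicates the distressed group's mean profitability lies well below the healthy group's, matching the abstract's finding that the profitability index separates the two groups.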
42

Nesting ecology of dickcissels on reclaimed surface-mined lands in Freestone County, Texas

Dixon, Thomas Pingul 17 February 2005 (has links)
Surface mining and subsequent reclamation often result in the establishment of large areas of grassland that can benefit wildlife. Grasslands have declined substantially over the last 150 years, resulting in declines of many grassland birds. The dickcissel (Spiza americana), a neotropical migrant, is one such bird; its numbers have declined in the last 30 years due to habitat loss, increased nest predation and parasitism, and overharvest (it is lethally controlled as an agricultural pest on its wintering range in Central and South America). Reclaimed surface-mined lands have been documented to provide important breeding habitat for dickcissels in the United States, emphasizing the importance of reclamation efforts. Objectives were to understand specific aspects of dickcissel nesting ecology (i.e., nest-site selection, nest success, nest parasitism, and identification of nest predators) at 2 spatial scales on TXU Energy's Big Brown Mine, near Fairfield, Texas, and subsequently to provide TXU Energy with recommendations for improving reclaimed areas as breeding habitat for dickcissels. I examined the influence of nest-site vegetation characteristics and the effects of field-level spatial factors on dickcissel nesting ecology on 2 sites reclaimed as wildlife habitat. Additionally, I developed a novel technique to identify predators at active nests during the 2003 field season. During 2002–2003, 119 nests were monitored. At the smaller spatial scale, dickcissels were likely to select nest sites with low vegetation, high densities of bunchgrasses and tall forbs, and higher clover content. Probability of nest success increased with nest height and vegetation height above the nest, characteristics associated with woody nesting substrates. Woody nesting substrates were selected and bunchgrasses were avoided. Oak (Quercus spp.) saplings remained an important nesting substrate throughout the breeding season.
On a larger scale, nest-site selection was likely to occur farther from wooded riparian areas and closer to recently-reclaimed areas. Nest parasitism was likely to occur near roads and wooded riparian areas. Results suggest reclaimed areas could be improved by planting more bunchgrasses, tall forbs (e.g., curly-cup gumweed [Grindelia squarrosa] and sunflower [Helianthus spp.]), clover (Trifolium spp.), and oaks (a preferred nesting substrate associated with higher survival rates). Larger-scale analysis suggests that larger tracts of wildlife areas should be created with wooded riparian areas comprising a minimal portion of a field’s edge.
43

Logistic regression models for predicting trip reporting accuracy in GPS-enhanced household travel surveys

Forrest, Timothy Lee 25 April 2007 (has links)
This thesis presents a methodology for logistic regression modeling of trip and household information obtained from household travel surveys, together with vehicle trip information obtained from global positioning systems (GPS), to better understand the trip underreporting that occurs. The methodology builds on previous research by adding variables to the logistic regression model that might contribute significantly to underreporting, specifically trip purpose. Understanding trip purpose is crucial in transportation planning because many of the transportation models used today are based on the number of trips in a given area by trip purpose. The methodology was applied to two study areas in Texas, Laredo and Tyler-Longview, where household travel survey data and GPS-based vehicle tracking data were collected over a 24-hour period for 254 households and 388 vehicles. These 254 households made a total of 2,795 trips, averaging 11.0 trips per household. By comparing the trips reported in the household travel survey with those recorded by the GPS unit, trips not reported in the survey were identified. Logistic regression was shown to be effective in determining which household- and trip-related variables significantly contributed to the likelihood of a trip being reported. Although different variables were identified as significant in each of the models tested, one variable was found to be significant in all of them: trip purpose. It was also found that household residence type and the use of household vehicles for commercial purposes did not significantly affect reporting rates in any of the models tested. The results support the need for modeling trips by trip purpose, but also indicate that different factors contribute to the level of underreporting from urban area to urban area.
An analysis of additional significant variables in each urban area found combinations that yielded trip reporting rates of 0%. Similar to the results of Zmud and Wolf (2003), trip duration and the number of vehicles available were also found to be significant in a full model encompassing both study areas.
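The analysis above models whether a trip is reported as a binary outcome. A minimal sketch of the idea in pure Python, fitting a logistic regression by plain gradient descent; the variable names and records are invented for illustration, not the survey's data:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fit_logistic(X, y, lr=0.1, epochs=2000):
    """Fit logistic regression weights (intercept first) by
    stochastic gradient ascent on the log-likelihood."""
    w = [0.0] * (len(X[0]) + 1)
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            p = sigmoid(w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi)))
            err = yi - p
            w[0] += lr * err
            for j, xj in enumerate(xi):
                w[j + 1] += lr * err * xj
    return w

# Hypothetical records: [trip_duration / 10 min, is_work_trip]; 1 = reported
X = [[0.5, 1], [3.0, 1], [0.3, 0], [2.5, 1],
     [0.4, 0], [2.8, 0], [0.6, 0], [3.2, 1]]
y = [0, 1, 0, 1, 0, 1, 0, 1]
w = fit_logistic(X, y)
pred = [1 if sigmoid(w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi))) > 0.5
        else 0 for xi in X]
```

In practice a statistics package would report coefficients and p-values per variable, which is how the thesis identifies trip purpose as significant; this sketch only shows the fitting mechanics.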
44

Social class and National identity in Taiwan

Lin, Hung-Wen 02 February 2008 (has links)
none
45

Forming of Enterprise's Crisis and Building the Crisis Forecasting Models

Su, Chin-hui 15 June 2009 (has links)
Under global competition, the survival of enterprises faces a major test. Because poor management increases the number of failing companies, an early-warning model for business crises is necessary. A deteriorating financial condition is the main source of financial crisis, so it is worth exploring in depth whether the facets and weights of potential factors, drawn from a firm's financial and managerial situation, can be analyzed to judge the cause of a corporate crisis and to build an early-warning model. The sample consists of companies delisted from the Taiwan Securities Exchange between 2006/01/01 and 2008/12/31; after applying the analysis standards and excluding firms with insufficient information as well as banking firms, 36 enterprises remain for analysis. For the variables, this study adopts the TEJ business credit risk indicators, integrates them with the literature, and analyzes the fundamental variables. The factors are found to be related to one another, an important message, and they fluctuate considerably between crisis and normal companies. The results of the DEA-DA model show that most crisis companies are affected by important explanatory factors in abnormal situations, which places them in the crisis cluster. Logistic regression results show that the forecasting model has strong explanatory power in distinguishing crisis companies from normal ones, and the crisis model built in this study yields the same assessments as a simplified model constructed from the key factors of the original model.
This study therefore shows an important fact: simplifying the crisis forecasting model does not change its outcomes, which also indicates that adding variables does not change the assessment results. Under the proposed model, a more positive model value indicates that a firm is more vulnerable to crisis; a more negative value indicates that it is less vulnerable.
46

Important factors in predicting detection probabilities for radiation portal monitors

Tong, Fei, 1986- 12 November 2010 (has links)
This report analyzes the impact of some important factors on the prediction of detection probabilities for radiation portal monitors (RPMs). The application of innovative detection technology to improve operational sensitivity of RPMs has received increasing attention in recent decades. In particular, two alarm algorithms, gross count and energy windowing, have been developed to try to distinguish between special nuclear material (SNM) and naturally occurring radioactive material (NORM). However, the use of the two detection strategies is quite limited due to a very large number of unpredictable threat scenarios. We address this problem by implementing a new Monte Carlo radiation transport simulation approach to model a large set of threat scenarios with predefined conditions. In this report, our attention is focused on the effect of two important factors on the detected energy spectra in RPMs, the mass of individual nuclear isotopes and the thickness of shielding materials. To study the relationship between these factors and the resulting spectra, we apply several advanced statistical regression models for different types of data, including a multinomial logit model, an ordinal logit model, and a curvilinear regression model. By utilizing our new simulation technique together with these sophisticated regression models, we achieve a better understanding of the system response under various conditions. We find that the different masses of the isotopes change the isotopes’ effect on the energy spectra. In analyzing the joint impact of isotopes’ mass and shielding thickness, we obtain a nonlinear relation between the two factors and the gross count of gamma photons in the energy spectrum.
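The report's transport simulations are far more detailed, but the gross-count alarm idea can be sketched with a toy Monte Carlo: estimate the probability that a Poisson-distributed count in one screening interval exceeds a fixed threshold. All numbers here are invented for illustration and bear no relation to the report's scenarios:

```python
import random

def detection_probability(mean_counts, threshold, trials=20_000, seed=42):
    """Monte Carlo estimate of P(count > threshold) for a Poisson
    process observed over one unit-length screening interval."""
    rng = random.Random(seed)
    alarms = 0
    for _ in range(trials):
        # Sample a Poisson count via exponential inter-arrival times
        count, t = 0, 0.0
        while True:
            t += rng.expovariate(mean_counts)
            if t > 1.0:
                break
            count += 1
        if count > threshold:
            alarms += 1
    return alarms / trials

# Hypothetical: background of 50 counts/interval vs. a source adding 20
p_bg = detection_probability(50, 65)   # false-alarm probability
p_src = detection_probability(70, 65)  # detection probability
```

The threshold trades off the false-alarm rate against detection probability, which is why shielding thickness (suppressing the source's contribution to the gross count) matters so much in the report's analysis.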
47

A study of courteous behavior on the University of Texas campus

Lu, Zhou, 1978- 22 February 2011 (has links)
This study focused on measuring the courteous behavior of University of Texas at Austin (UT) students on campus. The behavior was measured by analyzing the factors involved when a person opened a door for another; the goal was to determine which factors significantly affect the probability that a person would hold a door for another. Three UT buildings with no automatic doors were selected (RLM, FAC and GRE), and 200 pairs of students at each location were observed to see whether they would open doors for others. The subjects were not disturbed during the data collection process. For each observation, the door-holding conditions, genders, position (whether the person was the one who opened the door or the recipient of this courteous gesture, abbreviated as recipient), distance between the person opening the door and the recipient, and the number of recipients were recorded. Descriptive statistics and logistic regression were used to analyze the data. The results showed that the probability of people opening doors for others was significantly affected by gender, position, the distance between the person opening the door and the recipient, the number of recipients, and the interaction term between gender and position. The study revealed that men had a slightly higher propensity to open doors for recipients: the odds for men were a multiplicative factor of 1.09 of those for women on average, holding all other factors constant. Women, however, had a much higher probability of having doors held open for them: the odds for men were a multiplicative factor of 0.55 of those for women on average, holding all other factors constant. In terms of the distance between the person opening the door and the recipient, for each one-meter increase in distance, the odds that the door would be held open decreased by a multiplicative factor of 0.40 on average.
Additionally, for each increase in number of recipients, the odds that the door would be held open would increase by a multiplicative factor of 1.32 on average.
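The multiplicative factors reported above act on the odds, not on the probability directly; converting between the two is a small calculation. A sketch using the reported 0.40-per-meter distance factor; the baseline probability is an assumed illustration, not a figure from the study:

```python
def adjust_probability(p0, odds_factor, units):
    """Convert a baseline probability to odds, apply a multiplicative
    odds factor per unit of the covariate, and convert back."""
    odds = p0 / (1 - p0)
    odds *= odds_factor ** units
    return odds / (1 + odds)

# Assumed baseline: 80% chance the door is held at zero distance.
# Reported effect: odds shrink by a factor of 0.40 per meter.
p_at_2m = adjust_probability(0.80, 0.40, 2)
```

Starting from odds of 4 (an 80% probability), two meters of distance multiply the odds by 0.16, giving odds of 0.64, i.e. a probability of roughly 39%; this is why odds ratios understate or overstate probability changes depending on the baseline.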
48

Analysis of Longitudinal Data in the Case-Control Studies via Empirical Likelihood

Jian, Wen 09 June 2006 (has links)
Case-control studies are primary tools for studying risk factors (exposures) related to a disease of interest. Case-control studies using longitudinal data are cost- and time-efficient when the disease is rare and assessing the exposure level of risk factors is difficult. As an alternative to the GEE method, Park and Kim (2004) proposed a prospective logistic model for analyzing case-control longitudinal data and explored a semiparametric inference procedure. In this thesis, we apply an empirical likelihood ratio method to derive the limiting distribution of the empirical likelihood ratio and obtain a likelihood-ratio-based confidence region for the unknown regression parameters. Our approach does not require estimating the covariance matrices of the parameters. Moreover, the proposed confidence region adapts to the data set and is not necessarily symmetric; it therefore reflects the nature of the underlying data and gives a more representative way to make inferences about the parameter of interest. We compare the empirical likelihood method with the normal-approximation-based method, and simulation results show that the proposed empirical likelihood ratio method performs well in terms of coverage probability.
49

Evaluation of logistic regression and random forest classification based on prediction accuracy and metadata analysis

Wålinder, Andreas January 2014 (has links)
Model selection is an important part of classification. In this thesis we study two classification models, logistic regression and random forest, comparing and evaluating them based on prediction accuracy and metadata analysis. The models were trained on 25 diverse datasets. We calculated the prediction accuracy of both models using RapidMiner, and collected metadata for the datasets concerning the number of observations, the number of predictor variables, and the number of classes in the response variable.

There is a correlation between the performance of logistic regression and random forest, with a significant correlation of 0.60 and confidence interval [0.29, 0.79]. The models appear to perform similarly across the datasets, with performance influenced more by the choice of dataset than by model selection. Random forest, with an average prediction accuracy of 81.66%, performed better on these datasets than logistic regression, with an average prediction accuracy of 73.07%; the difference is, however, not statistically significant, with a p-value of 0.088 for Student's t-test.

Multiple linear regression analysis reveals that none of the analysed metadata have a significant linear relationship with logistic regression performance; the regression of logistic regression performance on metadata has a p-value of 0.66. We get similar results with random forest performance, where the corresponding regression has a p-value of 0.89.

We conclude that the prediction accuracies of logistic regression and random forest are correlated. Random forest performed slightly better on the studied datasets, but the difference is not statistically significant, and the studied metadata do not appear to have a significant effect on the prediction accuracy of either model.
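The correlation between the two models' per-dataset accuracies is a plain Pearson coefficient. A minimal pure-Python sketch on hypothetical accuracy pairs (not the thesis's 25 datasets):

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient of two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical per-dataset accuracies for the two classifiers
logreg = [0.71, 0.65, 0.80, 0.58, 0.77, 0.69, 0.74, 0.62]
forest = [0.78, 0.70, 0.85, 0.66, 0.83, 0.75, 0.80, 0.71]
r = pearson_r(logreg, forest)
```

A high r with one model consistently above the other, as in this toy data, matches the thesis's pattern: dataset difficulty drives both models' accuracy more than the choice between them.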
50

Practical aspects of kernel smoothing for binary regression and density estimation

Signorini, David F. January 1998 (has links)
This thesis explores the practical use of kernel smoothing in three areas: binary regression, density estimation and Poisson regression sample size calculations. Both nonparametric and semiparametric binary regression estimators are examined in detail, and extended to two bandwidth cases. The asymptotic behaviour of these estimators is presented in a unified way, and the practical performance is assessed using a simulation experiment. It is shown that, when using the ideal bandwidth, the two bandwidth estimators often lead to dramatically improved estimation. These benefits are not reproduced, however, when two general bandwidth selection procedures described briefly in the literature are applied to the estimators in question. Only in certain circumstances does the two bandwidth estimator prove superior to the one bandwidth semiparametric estimator, and a simple rule-of-thumb based on robust scale estimation is suggested. The second part summarises and compares many different approaches to improving upon the standard kernel method for density estimation. These estimators all have asymptotically 'better' behaviour than the standard estimator, but a small-sample simulation experiment is used to examine which, if any, can give important practical benefits. Very simple bandwidth selection rules which rely on robust estimates of scale are then constructed for the most promising estimators. It is shown that a particular multiplicative bias-correcting estimator is in many cases superior to the standard estimator, both asymptotically and in practice using a data-dependent bandwidth. The final part shows how the sample size or power for Poisson regression can be calculated, using knowledge about the distribution of covariates. This knowledge is encapsulated in the moment generating function, and it is demonstrated that, in most circumstances, the use of the empirical moment generating function and related functions is superior to kernel smoothed estimates.
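The standard kernel method for binary regression that the thesis takes as its baseline is, in its simplest form, the Nadaraya-Watson estimator: a locally weighted average of the 0/1 responses. A minimal sketch with a Gaussian kernel and a single fixed bandwidth, on illustrative data (the thesis's two-bandwidth and semiparametric estimators refine this):

```python
import math

def nw_estimate(x0, xs, ys, h):
    """Nadaraya-Watson kernel estimate of P(Y = 1 | X = x0)
    using a Gaussian kernel with bandwidth h."""
    weights = [math.exp(-0.5 * ((x0 - x) / h) ** 2) for x in xs]
    return sum(w * y for w, y in zip(weights, ys)) / sum(weights)

# Illustrative binary responses: success becomes likely as x grows
xs = [0.1, 0.3, 0.5, 0.9, 1.1, 1.4, 1.7, 1.9]
ys = [0, 0, 0, 0, 1, 1, 1, 1]
low = nw_estimate(0.2, xs, ys, h=0.3)
high = nw_estimate(1.8, xs, ys, h=0.3)
```

The bandwidth h controls the bias-variance trade-off, which is exactly where the thesis's bandwidth selection rules and rule-of-thumb based on robust scale estimation come in.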
