121

Work, Family, and Community in a Triciprocal Relationship: An Exploratory Study of Enrichment

Crowder, Cindy L 01 December 2007 (has links)
The purpose of this study was to expand the field of work-life literature through the introduction of a triciprocal enrichment model that examines work-, family-, and community-related support antecedents and satisfaction variables. The main objectives were to incorporate the concept of enrichment and the domain of community into work-life research, providing a more accurate portrayal of the myriad ways in which all three domains interact and affect one another. Data from 202 respondents were collected, including information on their level of community involvement; their level of enrichment within work, family, and community; their satisfaction on the job, with their family, and with their community; and the availability and usefulness of resources and support in their community, at work, and from their families. A survey instrument was designed online using the nTreePoint® Web Forms software package. Although the proposed model was rejected, this study should promote further empirical investigations of enrichment and the relationships between work, family, and community. The scale modified for this study to measure the enriching relationships between work, family, and community should be further tested and validated. The results of this study revealed that antecedents drawn from the work-life conflict literature do not produce enrichment. Therefore, research should be conducted to determine the specific factors that do produce enrichment.
122

Conditional Conservatism in Accounting: New Measures and Test of Determinants of the Asymmetric Timeliness in the Recognition of Good and Bad News in Reported Earnings

Gotti, Giorgio 01 May 2007 (has links)
Accounting standards mandate different, more conservative rules for the recognition of unrealized gains than for unrealized losses in reported earnings. Conditional conservatism, defined as asymmetric timeliness in the recognition of unrealized losses vs. gains in reported earnings, has, since its origins, been a distinctive characteristic of the accounting system. Understanding conservatism's role, its determinants, and its variation across firms is important for interpreting the nature, purposes, and valuation implications of accounting. Basu (1995; 1997) proposed a model to detect conditional conservatism in accounting and provided empirical evidence that bad news is recognized more quickly than good news in earnings for a sample over the period 1963-1991. Following his seminal work [1], the accounting literature adopted the Basu single-period model to measure conditional conservatism (Ball et al. 2000; Ball et al. 2005; Ball and Shivakumar 2005; Lobo and Zhou 2006). However, Basu's proxy for measuring the arrival of good/bad news, the price of the firm's stock, may be influenced in part by factors that will never be recorded in a firm's reported earnings, which introduces inaccuracy into the measure of conditional conservatism. To address this problem, I introduce a new measure of conditional conservatism, which results from a Least Absolute Deviation (LAD) piecewise regression and adopts the number of changes in financial analysts' EPS forecasts as a proxy for good/bad news. I then use this new measure to test the determinants of conditional conservatism in accounting suggested by previous literature. Results show that companies with (1) a lower debt-to-assets ratio, (2) a large proportion of executives' annual compensation independent of the firm's accounting performance, (3) one of the Big 4/Big 7 audit firms as auditor, and (4) an auditor opinion qualified with a going-concern assumption in the previous year exhibit greater timeliness in the recognition of bad news than good news in annual earnings.
[1] As of December 7, 2006, 102 citations for Basu (1997) are recorded in Thomson ISI's Social Sciences Citation Index (http://portal.isiknowledge.com) and 291 in Google Scholar (http://scholar.google.com).
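For readers unfamiliar with asymmetric-timeliness regressions, the sketch below illustrates a Basu-style piecewise specification estimated by least absolute deviations (median regression). It is an illustration only, under assumed names and simulated data: `eps_change` as the earnings measure and `news` as the net analyst EPS forecast revisions are hypothetical stand-ins, not the dissertation's actual specification or sample.

```python
# Illustrative Basu-style piecewise regression estimated by least absolute
# deviations (median regression). Variable names, data, and specification are
# assumptions for illustration, not the dissertation's model.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.regression.quantile_regression import QuantReg

rng = np.random.default_rng(0)
n = 500
# news: hypothetical proxy for good/bad news, e.g. the net number of upward
# minus downward analyst EPS forecast revisions over the year.
news = rng.normal(size=n)
bad = (news < 0).astype(float)        # bad-news indicator
# Simulated asymmetry: bad news loads more heavily on the earnings measure.
eps_change = (0.02 + 0.10 * news + 0.40 * bad * news
              + rng.laplace(scale=0.05, size=n))

X = sm.add_constant(pd.DataFrame({"bad": bad,
                                  "news": news,
                                  "bad_x_news": bad * news}))
res = QuantReg(eps_change, X).fit(q=0.5)   # q=0.5 -> LAD / median regression
print(res.params)
# A positive bad_x_news coefficient indicates timelier recognition of bad news
# than good news, i.e. conditional conservatism.
```

Setting q = 0.5 in the quantile regression minimizes absolute deviations, so the fit is less sensitive to outliers in the earnings measure, which is one motivation for an LAD rather than an OLS piecewise regression.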
123

Measuring the Impact of Workplace Design on Training Transfer Relative to Other Organizational Factors

Hillsman, Terron L 01 August 2007 (has links)
This ethnographic research extends the findings of an earlier study examining the impact of workplace design on training transfer. The study triangulates data and methods of inquiry through field observation, archival records, interviews, and a survey developed from the interview responses. Linking the earlier, more qualitative data and analysis with the later, more quantitative data and analysis helped to extend several theoretical considerations. Purposeful sampling was used to identify participants who held nonacademic supervisory positions at a major land-grant university. Participants had attended a performance review workshop and had been applying the learned skills for at least six months. The findings indicate that workplace design appears to play a vital role in both facilitating and impeding the transfer of supervisory skills in this study. The present study also offers a conceptual model that proposes where workplace design fits among other organizational factors perceived to impact training transfer. The findings alert and direct organizations to where they should channel their finite resources to support training transfer and give organizations a better ability to differentiate critical design features from design features that are more marginal to training transfer. Because this is a case study, organizations should not infer that these findings apply to all work settings; applicability depends on relevance to the particular work situation and circumstances. Methods of analysis: domain and taxonomic analyses, descriptive statistics, the binomial distribution, ANOVA with post hoc procedures, and hierarchical clustering.
124

Risk Management in the Post-SOX Era: Do Audit Firms Effectively Retain Clients?

Hollingsworth, Carl 01 May 2007 (has links)
Since the initial disclosure of accounting irregularities at Enron in late 2001, the landscape of public company audits has undergone substantial change. These changes include the conviction of Arthur Andersen in June 2002 and the enactment of the Sarbanes-Oxley Act of 2002 (SOX). These two changes have had a significant impact on the amount of work required to issue an audit report and on the number of clients that can be serviced by the remaining Big Four audit firms. While the existing literature provides some insight into how audit firms make client acceptance/continuance decisions, almost all of this literature predates SOX. I extend this literature by investigating how audit firms make client continuance decisions in the post-SOX era, whether these decisions are effective at identifying better clients, and why audit firms retain some risky clients while dismissing others. It is interesting to note that the Big Four audit firms use the same basic set of criteria when making a client continuance decision in the post-SOX era, even though the processes at the firms differ slightly. My findings also indicate that the client continuance process is much more formal and rigorous post-SOX. Additionally, I find that clients who are retained by their audit firms have better subsequent financial performance than clients who are not retained. Finally, I find that audit firms appear to overweight client size when making the client continuance decision; specifically, it appears that audit firms retain large clients whose risk profiles are consistent with those of smaller clients they dismiss.
125

The Importance of Market Opportunity Recognition Mechanisms in Interfunctional Management Teams

Bonney Jr., Frederick Lefferts 01 August 2008 (has links)
In today's fast-moving business environments, managers must be able to gather and interpret data in such a way as to identify lucrative market opportunities. However, being able to exploit these opportunities is contingent on management's ability to sense important changes in the market, or to see the market in a new way, and ultimately to craft an appropriate response to these insights. Unfortunately, this ability to identify market opportunities has not been explored in the marketing literature, and very little is known about the cognitive processes managers use as they seek out market opportunities. The purpose of this dissertation is to shed light on these cognitive processes by developing a conceptualization of market opportunity recognition mechanisms. Specifically, market opportunity recognition mechanisms are conceptualized as a set of interrelated constructs that include management team situational awareness, management team creative problem solving, and management team strategic and tactical agreement. This conceptualization is built from a thorough review of the entrepreneurship, creativity, cognitive science, and market orientation literatures as well as from insights gained from field interviews and observations. The market opportunity recognition mechanisms are tested in a nomological framework that includes a contingency-based view of firm responsiveness. The test of the dissertation hypotheses was conducted using participants engaged in a dynamic market simulation. The results suggest that situational awareness is the foundational construct in market opportunity recognition mechanisms and that the interaction between situational awareness and team agreement on tactical and strategic actions increases the probability that the team will effectively align resources to market conditions, ultimately resulting in increased financial performance.
126

The Lack of Consequences for Audit Committee Members Following Accounting Restatements and the Resulting Impact on Investors

Carver, Brian Todd 01 August 2008 (has links)
Prior research has assumed that financial reporting failures indicate that individual directors have provided inferior monitoring of the reporting process and has found that directors suffer the loss of board positions following reporting failures. These penalties, however, are not uniformly applied across all outside directors. Using a sample of firms that have experienced multiple reporting failures and a matched sample of non-restating firms, I collect information on individual audit committee members and investigate whether retention on the audit committee is related to the quality of the director or to the influence of the CEO over the board of directors. I then examine whether the retention of directors on the audit committee is related to further aggressive accounting practices and to additional negative consequences for investors in the long run. I find that the retention of directors on the audit committee is positively related to the quality of the director and negatively related to CEO influence over the board for both the restating and non-restating samples. I further find that the retention of directors on the audit committee following a reporting failure is not related to future aggressive accounting practices. Tests examining other long-term consequences to investors are inconclusive. Overall, these results suggest that the labor market for directors operates in an efficient and effective manner.
127

Bayesian Shrinkage Estimation and Model Selection

Armagan, Artin 01 August 2008 (has links)
We introduce a new shrinkage variable selection operator, which we term the Adaptive Ridge Selector (ARiS). This approach is inspired by the Relevance Vector Machine (RVM) of Tipping (2001), which uses a Bayesian hierarchical linear model to do sparse estimation. The RVM was originally introduced to obtain sparse solutions in kernel regression, where one has many highly correlated bases (features). Extending the RVM algorithm, we include a proper prior distribution for the precisions of the regression coefficients along with a hyperparameter to be chosen. Based upon this model, we derive the full set of conditional posterior distributions for the parameters, as would typically be done when applying Gibbs sampling. However, instead of simulating samples from the posterior distribution in order to estimate posterior means, we apply the Lindley-Smith mechanism (Lindley and Smith, 1972), which sequentially maximizes the conditional distributions in order to find the joint maximum of the posterior distribution given the value of the hyperparameter. An empirical Bayes method is proposed for choosing this hyperparameter, leading to ARiS-eB. Having derived the method from a Bayesian argument, we also look at the problem from a penalized least squares estimation angle. From this conventional viewpoint, the proposed method eliminates the need for combinatorial search techniques over a discrete model space, converting the model selection problem into the maximization of the marginal likelihood over a one-dimensional continuous space. Close similarities exist between the resulting estimator and lasso-type shrinkage estimators. The lasso (Tibshirani, 1996) and its variants, as will be thoroughly discussed, use the 1-norm for regularization, leading to sparse solutions. The proposed estimator is contrasted with various other shrinkage estimators through simulation studies and real data examples. Inference is also possible using a very straightforward Gibbs sampling procedure after the active variables in the model are determined. The model is also extended to handle departures from normality in the likelihood.
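As a rough illustration of the machinery the abstract builds on, the sketch below implements Tipping's (2001) relevance-vector-style updates for a Bayesian linear model with per-coefficient prior precisions, iterating conditional maximization rather than Gibbs sampling. It is not the ARiS or ARiS-eB estimator itself; the initialization, stopping rule, and example data are assumptions made for illustration.

```python
# Sketch of Tipping-style (2001) sparse Bayesian linear regression, the
# starting point the abstract extends; this is NOT the ARiS estimator itself.
import numpy as np

def sparse_bayes_regression(X, y, n_iter=300, tol=1e-6):
    """Iterative updates for a Bayesian linear model with per-coefficient
    precisions alpha_j; coefficients whose precision grows very large are
    effectively shrunk to zero."""
    n, p = X.shape
    alpha = np.ones(p)                      # prior precisions of coefficients
    sigma2 = max(np.var(y) * 0.1, 1e-6)     # noise variance (initial guess)
    for _ in range(n_iter):
        Sigma = np.linalg.inv(X.T @ X / sigma2 + np.diag(alpha))  # posterior cov
        mu = Sigma @ X.T @ y / sigma2                             # posterior mean
        gamma = 1.0 - alpha * np.diag(Sigma)    # "well-determinedness" factors
        alpha_new = gamma / (mu ** 2 + 1e-12)
        sigma2 = np.sum((y - X @ mu) ** 2) / max(n - gamma.sum(), 1e-8)
        if np.max(np.abs(alpha_new - alpha) / (alpha + 1e-12)) < tol:
            alpha = alpha_new
            break
        alpha = alpha_new
    return mu, alpha, sigma2

# Hypothetical usage: only the first two predictors are truly relevant.
rng = np.random.default_rng(1)
X = rng.normal(size=(100, 8))
y = 2.0 * X[:, 0] - 1.5 * X[:, 1] + rng.normal(scale=0.3, size=100)
mu, alpha, sigma2 = sparse_bayes_regression(X, y)
print(np.round(mu, 2))   # coefficients of irrelevant predictors shrink toward 0
```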
128

Design and Analysis of Screening Experiments Assuming Effect Sparsity

Edwards, David Joseph 01 August 2008 (has links)
Many initial experiments for industrial and engineering applications employ screening designs to determine which of possibly many factors are significant. These screening designs are usually highly fractionated factorials or Plackett-Burman designs that focus on main effects and provide limited information about interactions. To help simplify the analysis of these experiments, it is customary to assume that only a few of the effects are actually important; this assumption is known as 'effect sparsity'. This dissertation explores both design and analysis aspects of screening experiments assuming effect sparsity. In 1989, Russell Lenth proposed a method for analyzing unreplicated factorials that has become popular due to its simplicity and satisfactory power relative to alternative methods. We propose and illustrate the use of p-values, estimated by simulation, for Lenth t-statistics. This approach is recommended for its versatility: whereas tabulated critical values are restricted to the case of uncorrelated estimates, we illustrate the use of p-values for both orthogonal and nonorthogonal designs. For cases where there is limited replication, we suggest computing t-statistics and p-values using an estimator that combines the pure error mean square with a modified Lenth's pseudo standard error. Supersaturated designs (SSDs) are designs that examine more factors than available runs. SSDs were introduced to handle situations in which a large number of factors are of interest but runs are expensive or time-consuming. We begin by assessing the null-model performance of SSDs when using all-subsets and forward selection regression, highlighting the propensity of model selection criteria to overfit. We subsequently propose a strategy for analyzing SSDs that combines all-subsets regression and permutation tests, and we illustrate the methods with several examples. In contrast to the usual sequential nature of response surface methodology (RSM), recent literature has proposed conducting both screening and response surface exploration using only one three-level design, an approach named "one-step RSM". We discuss and illustrate two shortcomings of current one-step RSM designs and analyses. Subsequently, we propose a new class of three-level designs and an analysis strategy unique to these designs that addresses these shortcomings and helps advise the user appropriately as to factor importance. We illustrate the designs and analysis with simulated and real data.
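Lenth's pseudo standard error (PSE) and simulation-based p-values for the resulting t-statistics can be sketched as follows. The effect estimates, simulation size, and pooling of null statistics are illustrative assumptions covering only the uncorrelated-estimates case, not the dissertation's own data or procedure.

```python
# Sketch of Lenth's (1989) pseudo standard error and Monte Carlo p-values for
# Lenth t-statistics in an unreplicated two-level factorial.
import numpy as np

def lenth_pse(effects):
    """Lenth's pseudo standard error for a vector of effect estimates."""
    abs_e = np.abs(effects)
    s0 = 1.5 * np.median(abs_e)
    trimmed = abs_e[abs_e < 2.5 * s0]    # drop apparently active effects
    return 1.5 * np.median(trimmed)

def simulated_p_values(effects, n_sim=10000, seed=0):
    """Unadjusted p-values for Lenth t-statistics, estimated by simulating
    null effect vectors (independent, homoscedastic, uncorrelated case)."""
    rng = np.random.default_rng(seed)
    m = len(effects)
    t_obs = np.abs(effects) / lenth_pse(effects)
    null_t = np.empty((n_sim, m))
    for i in range(n_sim):
        e = rng.normal(size=m)           # scale is irrelevant: t is scale-free
        null_t[i] = np.abs(e) / lenth_pse(e)
    null_pool = null_t.ravel()
    return np.array([(null_pool >= t).mean() for t in t_obs])

# Hypothetical unreplicated 2^4 factorial with 15 effect estimates.
effects = np.array([21.6, 3.8, 9.7, 1.2, 0.6, -0.4, 1.1,
                    -0.8, 0.9, -1.3, 0.4, 2.7, -0.2, 0.7, 1.5])
print(np.round(simulated_p_values(effects), 3))
```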
129

Exploring Relational Aspects of Time-Based Competition in Supply Chains

Thomas, Rodney Wayne 01 December 2008 (has links)
In today’s evolving business environment, firms must increasingly focus on rapid adaptation, quick response, and time-based performance (Wisner et al., 2008; Eisenhardt and Martin, 2000; Barney et al., 2001). In order to remain competitive, firms are becoming time-based competitors because consumers have become more demanding. Firms now must quickly adapt, innovate, and implement new ways of serving the ever-changing preferences of customers (Dickson 1992). These changing consumer demands require firms to seek time-based sources of competitive advantage such as speed and flexibility in order to survive in hypercompetitive global markets (D'Aveni 1994; D'Aveni 1998). Time-based competition (TBC) theory formally recognizes the strategic role of time and proposes that a strategy of intense focus on shrinking the time requirements of key supply chain activities can yield a competitive advantage (Stalk Jr. and Hout 1990). One approach to becoming a time-based competitor is relational (Droge, Jayaram, and Vickery 2004). However, with the relational approach, the TBC literature provides little explanation as to how interfirm supply chain relationships are used to achieve time-based performance. Although the interfirm relationship literature is vast, it does not address relationships in an environment with an intense pressure to focus on time. At its very essence, the continuous pursuit of time-based competitive advantage may mandate increasing pressure to perform more quickly. In the pursuit of such quick response, firms may place other supply chain members under time pressure (Thomas 2008). Therefore, the purpose of this mixed methods research is to begin to explore the phenomenon of time pressure in supply chain relationships.
130

Multivariate Mixed Data Mining with Gifi System using Genetic Algorithm and Information Complexity

Katragadda, Suman 01 December 2008 (has links)
Statistical analysis is very much dependent on the quality and type of a data set. There are three types of data: continuous, categorical, and mixed. Of these three, statistical modeling of mixed data has long been a challenging task, because most traditional statistical techniques are defined either for purely continuous data or for purely categorical data, but not for mixed data. In reality, most data sets are neither continuous nor categorical in a pure sense but are in mixed form, which makes statistical analysis quite difficult. For instance, in the medical sector, where classification of the data is very important, the presence of many categorical and continuous predictors can result in a poor model. In the insurance and finance sectors, large amounts of categorical and continuous data are collected on customers for targeted marketing, detection of suspicious insurance claims, actuarial modeling, risk analysis, modeling of financial derivatives, detection of profitable zones, etc. In this work, we bring together several relatively new developments in statistical model selection and data mining, and we address two problems. The first problem is to determine the optimal number of mixtures for multivariate Bernoulli-distributed data using a genetic algorithm and Bozdogan's information complexity, ICOMP. We show that maximum likelihood values alone are not sufficient for determining the optimal number of mixtures. We also address the issue of high-dimensional binary data, using a genetic algorithm to determine the optimal predictors. Finally, we show the results of our algorithm on one simulated and two real data sets. The second problem is to discover interesting patterns in a complicated mixed data set. Since mixed data are a combination of continuous and categorical variables, we transform the nonlinear categorical variables to a linear scale by a mechanism called the Gifi transformation (Gifi, 1989). Once the nonlinear variables are transformed to a linear scale (Euclidean space), we apply several classical multivariate techniques to the transformed continuous data to identify unusual patterns. The advantage of this transformation is that it is a one-to-one mapping; hence, the transformed set of continuous values in the Gifi space can be remapped to a unique set of categorical values in the original space. Once the data are transformed to the Gifi space, we implement various statistical techniques to identify interesting patterns. We also address the problem of high-dimensional data, using a genetic algorithm for variable selection and Bozdogan's information complexity (ICOMP) as our fitness function. We present details of our newly developed Matlab toolbox, called the Gifi System, which implements everything presented and can readily be extended to add new functionality. Finally, results on both simulated and real-world data sets are presented and discussed. Keywords: Gifi, homals, regression, multivariate logistic regression, fraud detection, medical diagnostics, supervised classification, unsupervised classification, variable selection, high-dimensional data mining, stock market trading, detection of suspicious insurance claim estimates.
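The first problem, choosing the number of components in a multivariate Bernoulli mixture, can be illustrated with a compact EM sketch. The genetic-algorithm search and Bozdogan's ICOMP criterion described above are not reproduced here; the sketch fits candidate mixtures by EM and scores them with BIC as a stand-in information criterion, and the data and settings are assumptions for illustration.

```python
# Sketch: EM for a multivariate Bernoulli mixture, scoring candidate numbers
# of components with BIC as a stand-in for an information criterion such as
# ICOMP. Data, initialization, and settings are illustrative assumptions.
import numpy as np

def bernoulli_mixture_em(X, k, n_iter=200, seed=0):
    rng = np.random.default_rng(seed)
    n, d = X.shape
    pi = np.full(k, 1.0 / k)                       # mixing weights
    theta = rng.uniform(0.25, 0.75, size=(k, d))   # Bernoulli parameters
    for _ in range(n_iter):
        # E-step: responsibilities, computed in the log domain for stability
        log_p = X @ np.log(theta).T + (1 - X) @ np.log(1 - theta).T + np.log(pi)
        log_p -= log_p.max(axis=1, keepdims=True)
        r = np.exp(log_p)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: update mixing weights and Bernoulli parameters
        nk = r.sum(axis=0) + 1e-9
        pi = nk / nk.sum()
        theta = np.clip((r.T @ X) / nk[:, None], 1e-6, 1 - 1e-6)
    # Log-likelihood and BIC for this number of components
    log_p = X @ np.log(theta).T + (1 - X) @ np.log(1 - theta).T + np.log(pi)
    ll = np.logaddexp.reduce(log_p, axis=1).sum()
    n_params = k * d + (k - 1)
    return pi, theta, -2 * ll + n_params * np.log(n)

# Hypothetical binary data generated from two well-separated clusters.
rng = np.random.default_rng(3)
true_theta = np.array([[0.9, 0.8, 0.1, 0.2],
                       [0.1, 0.2, 0.9, 0.8]])
z = rng.integers(0, 2, size=400)
X = (rng.random((400, 4)) < true_theta[z]).astype(float)
for k in (1, 2, 3, 4):
    print(k, round(bernoulli_mixture_em(X, k)[2], 1))   # smaller BIC is better
```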
