501
Voluntary Disclosure of Earnings Forecast: A Model of Strategic Disclosure with Evidence from Taiwan
Chang, Wei-shuo, 27 December 2010 (has links)
Since 2005, the disclosure of financial forecasts by Taiwanese public companies has not been mandatory: firms can decide whether to disclose, and if so, how and when. How does investors' reaction affect this decision? Furthermore, what is the trade-off between transparency and precision? This study develops a theoretical model in which the voluntary disclosure of earnings forecasts is a double-edged sword: such disclosure may reduce information asymmetry, but it also allows entrepreneurs to hype the stock. The model assumes that insiders may manipulate information and that investors learn with bounded rationality. The analytical results demonstrate that entrepreneurs may forgo earnings forecast disclosure if they can achieve greater profit under non-disclosure. In the multiperiod case, insiders reduce their forecast manipulation because of the cost of forecast error and diminishing marginal expected profit. The model thus offers an explanation for the decline in voluntary disclosure and the popularity of investor conferences in Taiwan. The model's inferences are examined using forecasts issued by Taiwanese listed firms. The empirical results show a positive relationship between insiders' trading profit and manipulation of earnings forecasts. Additionally, insiders' trading profit around forecast revisions is greater under voluntary disclosure than under mandatory disclosure. This study offers important insights into earnings forecast policy in emerging markets.
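The multiperiod intuition above (manipulation falls as marginal expected profit shrinks while the forecast-error cost stays convex) can be illustrated with a toy calculation. The Python sketch below is not the dissertation's model; the functional forms, coefficients, and number of periods are assumptions chosen only to make the stated trade-off visible.

```python
import numpy as np

# Toy illustration: each period the insider picks a manipulation level m to trade off
# expected trading profit against the cost of forecast error. Assumed forms:
# profit_t(m) = a_t * sqrt(m) with a_t shrinking over time (diminishing marginal
# expected profit), cost(m) = c * m**2 (convex forecast-error cost).
a = [1.0, 0.7, 0.5, 0.35]      # assumed per-period profit scale, decaying over time
c = 2.0                        # assumed forecast-error cost coefficient
m_grid = np.linspace(0.0, 1.0, 10001)

for t, a_t in enumerate(a, start=1):
    payoff = a_t * np.sqrt(m_grid) - c * m_grid ** 2
    m_star = m_grid[payoff.argmax()]
    print(f"period {t}: optimal manipulation level ~ {m_star:.3f}")
# The optimal level falls period by period, mirroring the multiperiod result above.
```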
502
Knowing mathematics for teaching: a case study of teacher responses to students' errors and difficulties in teaching equivalent fractions
Ding, Meixia, 15 May 2009 (has links)
The goal of this study is to align teachers' Mathematical Knowledge for Teaching (MKT) with their classroom instruction. To reduce classroom complexity while keeping the connection between teaching and learning, I focused on Teacher Responses to Student Errors and Difficulties (TRED) in teaching equivalent fractions, using students' cognitive gains as the measure of teaching effects. This research used a qualitative paradigm. Classroom videos concerning equivalent fractions from six teachers were observed and triangulated with tests of teacher knowledge and personal interviews. Data collection and analysis followed a naturalistic inquiry process. The results indicated that classrooms differed greatly in TRED around six themes: two learning difficulties regarding critical prior knowledge, two common errors related to the learning goal, and two emergent topics concerning basic mathematical ideas. Each of these themes affected students' cognitive gains. Teachers' knowledge as reflected in the interviews, however, was not necessarily consistent with their classroom instruction. Among the six teachers, apart from one whose knowledge obviously lagged behind, the other five demonstrated a similarly good understanding of equivalent fractions; with respect to the basic mathematical ideas, however, their knowledge and sensitivity differed. The teachers who understood equivalent fractions as well as the basic mathematical ideas were able to teach for understanding. Based on these six teachers' practitioner knowledge, a Mathematical Knowledge Package for Teaching (MKPT) concerning equivalent fractions was provided as a professional knowledge base. In addition, this study argued that only when teachers had knowledge bases with strong connections to mathematical foundations could they flexibly activate and transfer their knowledge (CCK and PCK) into their use of knowledge (SCK) in teaching contexts. Further attention is therefore called for in collaboratively cultivating teachers' mathematical sensitivity.
503
The Use Of English Prepositions In Second Language Acquisition Process
Cabuk, Sakine, 01 December 2009 (has links) (PDF)
This thesis investigates the use of the three most frequent English prepositions by Turkish learners of English at an intermediate level of proficiency. The aim of the present study is to find out which differences between English prepositions and their corresponding Turkish postpositions or case suffixes constitute difficulty for second language learners. The study also examines the possible reasons for the errors/mistakes pertaining to 'in', 'on', and 'at' made by native speakers of Turkish with an intermediate level of proficiency in English. To fulfill this purpose, the participants were recorded in a natural classroom environment and the recordings were transcribed. The compiled corpus was examined by two native speakers of English and the researcher. The data were classified under four main categories for each preposition investigated: (i) correct usage; (ii) misuse (instead of 'on', for instance, students use 'in' or 'at', e.g., in television); (iii) overuse (no preposition is required in the context but the students use one, e.g., I am going at home now); (iv) omission (a preposition is needed but the students do not use one, e.g., We go holiday). These analyses identified the problematic contexts related to the use of 'in', 'on', and 'at' for TIME and PLACE. For detailed analysis of each category, two tools were used: the Statistical Package for the Social Sciences (SPSS) and the Computerized Language Analysis (CLAN) program of the Child Language Data Exchange System (CHILDES). The results indicate that Turkish learners of English produce erroneous forms of prepositions in the second language acquisition process, and that the underlying reason for these errors/mistakes is interference from the native language.
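For readers who want the four-way coding scheme above in operational form, here is a minimal Python sketch of it. The function name and interface are hypothetical; the categories and the example errors are the ones listed in the abstract.

```python
def classify_preposition_use(required, produced):
    """Code one learner utterance against the target-like requirement.

    required: the preposition the context calls for ('in', 'on', 'at'), or None if none is needed.
    produced: the preposition the learner actually used, or None if omitted.
    """
    if required is None and produced is not None:
        return "overuse"        # e.g. "I am going at home now"
    if required is not None and produced is None:
        return "omission"       # e.g. "We go holiday"
    if required == produced:
        return "correct usage"
    return "misuse"             # e.g. "in television" where 'on' is required

print(classify_preposition_use("on", "in"))    # misuse
print(classify_preposition_use(None, "at"))    # overuse
print(classify_preposition_use("on", None))    # omission
```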
504
Forecasting the Equity Premium and Optimal Portfolios
Bjurgert, Johan; Edstrand, Marcus, January 2008 (has links)
The expected equity premium is an important parameter in many financial models, especially within portfolio optimization. A good forecast of the future equity premium is therefore of great interest. In this thesis we seek to forecast the equity premium, use it in portfolio optimization, and then give evidence on how sensitive the results are to estimation errors and how their impact can be minimized.

Linear prediction models are commonly used by practitioners to forecast the expected equity premium, with mixed results. Choosing only the model that performs best in-sample does not take model uncertainty into account. Our approach is to still use linear prediction models, but to take model uncertainty into consideration by applying Bayesian model averaging. The predictions are used in the optimization of a portfolio of risky assets to investigate how sensitive portfolio optimization is to estimation errors in the mean vector and covariance matrix. This is done using a Monte Carlo based heuristic called portfolio resampling.

The results show that the predictive ability of linear models is not substantially improved by taking model uncertainty into consideration. This could mean that the main problem with linear models is not model uncertainty, but rather too low predictive ability. However, we find that our approach gives better forecasts than simply using the historical average as an estimate. Furthermore, we find some predictive ability in GDP, the short-term spread, and volatility for the five years to come. Portfolio resampling proves to be useful when the input parameters of a portfolio optimization problem suffer from substantial uncertainty.
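To illustrate the portfolio-resampling heuristic referred to above, the following Python sketch repeatedly simulates return histories from the estimated mean vector and covariance matrix, re-estimates and re-optimizes a simple mean-variance portfolio for each draw, and averages the resulting weights. It is a minimal sketch under assumed inputs, not the thesis's implementation; the asset estimates, risk aversion, sample length, and number of resamples are placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder point estimates for three risky assets (annualized, assumed).
mu_hat = np.array([0.06, 0.08, 0.10])                  # expected excess returns
sigma_hat = np.array([[0.04, 0.01, 0.00],
                      [0.01, 0.09, 0.02],
                      [0.00, 0.02, 0.16]])             # return covariance matrix
n_obs, n_resamples, risk_aversion = 120, 500, 4.0      # assumed settings

def mean_variance_weights(mu, sigma, gamma):
    # Unconstrained mean-variance solution w = (1/gamma) * Sigma^{-1} mu,
    # rescaled to sum to one so weights are comparable across resamples.
    w = np.linalg.solve(sigma, mu) / gamma
    return w / w.sum()

weights = []
for _ in range(n_resamples):
    # Simulate a return history from the point estimates, then re-estimate and re-optimize.
    sample = rng.multivariate_normal(mu_hat, sigma_hat, size=n_obs)
    mu_s, sigma_s = sample.mean(axis=0), np.cov(sample, rowvar=False)
    weights.append(mean_variance_weights(mu_s, sigma_s, risk_aversion))

resampled_w = np.mean(weights, axis=0)                 # averaged (resampled) portfolio
direct_w = mean_variance_weights(mu_hat, sigma_hat, risk_aversion)
print("direct weights:   ", np.round(direct_w, 3))
print("resampled weights:", np.round(resampled_w, 3))
```

Comparing the two weight vectors gives a feel for how sensitive the optimizer is to sampling error in its inputs; the averaged, resampled weights are typically less extreme than the weights computed directly from the point estimates.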
505
Evaluation of spelling correction and concept-based searching models in a data entry application
Nobles, Royce Anthony, January 2009 (links) (PDF)
Thesis (M.S.)--University of North Carolina Wilmington, 2009. / Title from PDF title page (February 17, 2010). / Includes bibliographical references (p. 68-69).
506
The lexicon in a model of language production
Stemberger, Joseph P., January 1985 (links)
Thesis (Ph. D.)--University of California, San Diego, 1982. / Includes bibliographical references (p. 291-299).
507
Techniques for Enhancing Reliability in VLSI Circuits
Hyman Jr, Ransford Morel, 01 January 2011 (links)
Reliability is an important issue in very large scale integration (VLSI) circuits. In the absence of a focus on reliability in the design process, a circuit's functionality can be compromised. Since chips are fabricated in bulk, if reliability issues are only diagnosed during manufacturing, the faulty chips must be discarded, which reduces product yield and increases cost. Aware of this, chip designers attempt to resolve as many reliability issues as possible at the front end of the design phase (architectural or system-level modeling), because the cost of fixing an error grows as the design phase matures. Designers are known to allocate a large amount of resources to the reliability of a chip, both to maintain confidence in their product and to reduce the cost of errors found in the design. The reliability of a design is degraded by causes ranging from soft errors, electromigration, hot carrier injection, negative bias temperature instability (NBTI), crosstalk, and power supply noise to variations in the physical design. With the continued scaling of circuit designs enabled by advances in technology, these reliability issues have an ever greater impact on the design. Combined with the demand for high performance, this leaves chip designers facing the objective of designing circuits that are reliable, high-performance, and energy-efficient, which is especially important given the huge growth of mobile, battery-operated electronic devices in the market. Prior research has contributed significantly to increasing the reliability of VLSI designs; however, such techniques are often computationally expensive or power intensive. In this dissertation, we develop a set of new techniques to generate reliable designs by minimizing soft error, peak power, and variation effects.

Several architectural-level techniques are proposed to detect soft errors with minimal performance overhead, making use of data, information, temporal, and spatial redundancy. The techniques are designed so that much of their latency overhead can be hidden by the latency of other functional operations; it is shown that the proposed methodologies can be implemented with negligible or minimal performance overhead, hidden by critical-path operations in the datapath.

In designs with large peak power values, high current spikes cause noise in the power supply, creating timing issues that affect the circuit's functionality. A path clustering algorithm is proposed that attempts to normalize the current draw over the circuit's clock period by delaying the start times of certain paths. By reducing the number of paths starting at any one time instance, the amount of current drawn from the power supply at that instance is reduced. Experimental results indicate a reduction of up to 72% in peak power when tested on the ISCAS '85 and OpenCores benchmarks.

Variations in VLSI designs arise from process, supply voltage, and temperature (PVT). These variations cause non-ideal behavior at random internal nodes, which impacts the timing of the design. A variation-aware circuit-level design methodology is presented in which the architecture dynamically stretches the clock when variation effects are observed within the circuit during computation. While previous research efforts are directed towards reducing variation effects, this technique offers an alternative approach that adapts dynamically to them. The design technique is shown to increase timing yield on the ITC '99 benchmark circuits by an average of 41% with negligible area overhead.
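The path clustering idea in the abstract above (spreading path start times so that the current draw is flatter across the clock period) can be illustrated with a simple greedy scheduler. This is only a sketch of the clustering principle, not the dissertation's algorithm; the slot granularity, current model, and slack values are assumptions.

```python
# Greedy sketch: assign each path a start slot within the clock period so that the
# per-slot current (sum of the currents of paths starting in that slot) stays flat.
# Assumptions: a path draws most of its current near its start time, and a path may
# only be delayed up to its max_start_slot so it still meets the clock period.

CLOCK_SLOTS = 8                                    # assumed discretization of the period

# (path_id, current_draw, max_start_slot); max_start_slot encodes the path's slack.
paths = [("p0", 5.0, 3), ("p1", 4.0, 0), ("p2", 3.5, 5),
         ("p3", 3.0, 2), ("p4", 2.5, 7), ("p5", 2.0, 4)]

slot_current = [0.0] * CLOCK_SLOTS
schedule = {}
for pid, current, max_start in sorted(paths, key=lambda p: -p[1]):   # big consumers first
    best = min(range(max_start + 1), key=lambda s: slot_current[s])  # least-loaded feasible slot
    schedule[pid] = best
    slot_current[best] += current

print("start slots:", schedule)
print("per-slot current:", slot_current)
print("peak current:", max(slot_current), "vs.", sum(c for _, c, _ in paths), "if all start together")
```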
508
Efficient modeling of soft error vulnerability in microprocessors
Nair, Arun Arvind, 11 July 2012
Reliability has emerged as a first class design concern, as a result of an
exponential increase in the number of transistors on the chip, and lowering of
operating and threshold voltages with each new process generation.
Radiation-induced transient faults are a significant source of soft errors in
current and future process generations. Techniques to mitigate their effect come
at a significant cost of area, power, performance, and design effort.
Architectural Vulnerability Factor (AVF) modeling has been proposed to easily
estimate the processor's soft error rates, and to enable the designers to make
appropriate cost/reliability trade-offs early in the design cycle. Using cycle-accurate
microarchitectural or logic gate-level simulations, AVF modeling captures the
masking effect of program execution on the visibility of soft errors at the
output. AVF modeling is used to identify structures in the processor that have
the highest contribution to the overall Soft Error Rate (SER) while running
typical workloads, and used to guide the design of SER mitigation mechanisms.
The precise mechanisms of interaction between the workload and the
microarchitecture that together determine the overall AVF are not well studied in
the literature, beyond qualitative analyses. Consequently, there is no known
methodology for ensuring that the workload suite used for AVF modeling offers
sufficient SER coverage. Additionally, owing to the lack of an intuitive model,
AVF modeling is reliant on detailed microarchitectural simulations for
understanding the impact of scaling processor structures, or for design space
exploration studies. Microarchitectural simulations are time-consuming, and do
not easily provide insight into the mechanisms of interactions between the
workload and the microarchitecture to determine AVF, beyond aggregate
statistics.
These challenges are addressed in this dissertation by developing
two methodologies.
First, beginning with a systematic analysis of the factors affecting the occupancy of
corruptible state in a processor, a methodology is developed that
generates a synthetic workload for a given microarchitecture such that the SER
is maximized. As it is impossible for every bit in the processor to
simultaneously contain corruptible state, the worst-case realizable SER
while running a workload is less than the sum of the circuit-level fault rates of all bits.
The knowledge of the worst-case SER enables efficient design trade-offs by
allowing the architect to validate the coverage of the workload suite and select
an appropriate design point, and to identify structures that may potentially have
high contribution to SER. The methodology
induces 1.4X higher SER in the core as compared to the highest SER induced
by SPEC CPU2006 and MiBench programs.
Second, a first-order analytical model, developed from first principles of
out-of-order superscalar execution, is proposed to model the AVF
induced by a workload in microarchitectural structures using inexpensive
profiling. The central component of this model is a methodology to estimate the
occupancy of correct-path state in various structures in the core. Owing to its
construction, the model provides fundamental insight into the precise mechanism
of interaction between the workload and the microarchitecture to determine AVF.
The model is used to cheaply perform
sizing studies for structures in the core, design space exploration, and workload
characterization for AVF. The model is used to quantitatively explain results
that may appear counter-intuitive from aggregate performance metrics. The Mean
Absolute Error in determining AVF of a 4-wide out-of-order superscalar processor
using the model is less than 7% for each structure, and the Normalized Mean Square
Error for determining overall SER is 9.0%, as compared to cycle-accurate microarchitectural simulation. / text
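For concreteness, AVF is commonly defined as the fraction of a structure's bit-cycles that hold ACE (Architecturally Correct Execution) state, and a structure's SER contribution is its raw circuit-level fault rate derated by its AVF. The Python sketch below illustrates that bookkeeping; the occupancy trace, structure size, and FIT rate are placeholders, not data or code from the dissertation.

```python
def avf(ace_bits_per_cycle, num_bits):
    """Architectural Vulnerability Factor: average fraction of a structure's bits that
    hold ACE (Architecturally Correct Execution) state per cycle."""
    cycles = len(ace_bits_per_cycle)
    return sum(ace_bits_per_cycle) / (num_bits * cycles)

def derated_ser(avf_value, raw_fit):
    """SER contribution of a structure: its raw circuit-level FIT rate derated by AVF."""
    return avf_value * raw_fit

# Placeholder example: a 32-entry, 64-bit-per-entry structure observed over 6 cycles.
trace = [512, 640, 700, 900, 820, 760]       # ACE bits resident in each cycle (assumed)
structure_bits = 32 * 64
a = avf(trace, structure_bits)
print(f"AVF = {a:.2%}")
print(f"SER contribution = {derated_ser(a, raw_fit=100.0):.1f} FIT (assuming 100 FIT raw)")
```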
509
Assertion-based repair of complex data structures
Elkarablieh, Bassem H., 09 August 2012
As software systems grow in complexity and size, reliability becomes a major concern. A large share of industrial and academic effort for increasing software reliability is directed towards design, testing, and validation: activities performed before the software is deployed. While such activities are fundamental for achieving high levels of confidence in software systems, bugs still occur after deployment, resulting in costly software failures. This dissertation presents assertion-based repair, a novel approach for error recovery from insidious bugs that occur after the system is deployed. It describes the design and implementation of a repair framework for Java programs and evaluates the efficiency and effectiveness of the approach in repairing data structure errors in both software libraries and open-source stand-alone applications. Our approach introduces a new form of assertion, assertAndRepair, for developers to use when checking the consistency of the data structures manipulated by their programs with respect to a set of desired structural and data properties. The developer provides the properties in a Java boolean method, repOk, which returns a truth value based on whether a given data structure satisfies these properties. Upon an assertion violation due to a faulty structure, instead of terminating the execution, the structure is repaired, i.e., its fields are mutated so that the resulting structure satisfies the desired properties, and the program proceeds with its execution. To aid developers in detecting the causes of the fault, repair logs are generated that provide useful information about the performed mutations. The repair process uses a novel algorithm that employs a systematic search based on symbolic execution to determine valuations for the structures' fields that result in a valid structure. Our experiments on repairing both library data structures and stand-alone applications demonstrate the utility and efficiency of the approach in repairing large structures, enabling programs to recover from crippling errors and proceed with their execution. Assertion-based repair presents a novel post-deployment mechanism that integrates with existing and newly developed software, providing it with the defensive ability to recover from unexpected runtime errors. Programmers already understand the advantages of using assertions and are comfortable with writing them. Providing new analyses and powerful extensions for them presents an attractive direction towards building more reliable software. / text
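The dissertation's framework targets Java, but the core idea, a repOk-style consistency predicate guarded by an assert-and-repair call that mutates fields until the predicate holds instead of aborting, can be sketched as a toy Python analogue. Everything below is assumed for illustration; in particular, the dissertation's repair uses a symbolic-execution-based search rather than the hand-written field rewrite shown here.

```python
class Node:
    def __init__(self, value, nxt=None):
        self.value, self.next = value, nxt

class SinglyLinkedList:
    """Toy structure: a chain of nodes plus a cached size field that must match the length."""
    def __init__(self):
        self.head, self.size = None, 0

    def rep_ok(self):
        # Desired properties: the chain is acyclic and the size field equals the node count.
        seen, node = set(), self.head
        while node is not None:
            if id(node) in seen:
                return False
            seen.add(id(node))
            node = node.next
        return self.size == len(seen)

def assert_and_repair(lst, repair_log):
    """Toy analogue of assertAndRepair: on a violation, mutate fields until rep_ok holds."""
    if lst.rep_ok():
        return
    seen, prev, node = set(), None, lst.head
    while node is not None and id(node) not in seen:
        seen.add(id(node))
        prev, node = node, node.next
    if node is not None:                 # a revisit means a cycle: break the back edge
        prev.next = None
        repair_log.append("broke a cycle in the node chain")
    if lst.size != len(seen):            # rewrite a corrupted size field
        repair_log.append(f"size field repaired: {lst.size} -> {len(seen)}")
        lst.size = len(seen)
    assert lst.rep_ok()                  # consistent again; the program keeps running

# Usage: corrupt the size field, then recover and continue instead of crashing.
log = []
lst = SinglyLinkedList()
lst.head, lst.size = Node(1, Node(2)), 99
assert_and_repair(lst, log)
print(log)                               # ['size field repaired: 99 -> 2']
```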
510
Compensation-oriented quality control in multistage manufacturing processes
Jiao, Yibo, 11 October 2012
Significant research has recently been initiated to devise control strategies that can predict and compensate for manufacturing errors using so-called explicit Stream-of-Variation (SoV) models, which relate process parameters in a Multistage Manufacturing Process (MMP) to product quality. This doctoral dissertation addresses several important scientific and engineering problems that significantly advance the model-based, active control of quality in MMPs.
First, we formally introduce and study the new concept of compensability in MMPs, analogous to the concept of controllability in traditional control theory. Compensability in an MMP is introduced as the property denoting one's ability to compensate for errors in the quality characteristics of the workpiece, given the allocation and character of the measurements and controllable tooling. The notions of "within-station" and "between-station" compensability are also introduced to describe the ability to compensate for upstream product errors within a given operation or between arbitrarily selected operations, respectively.
Previous research has also failed to concurrently utilize historical and on-line measurements of key product characteristics for active model-based quality control. This dissertation explores the possibility of merging the well-known Run-to-Run (RtR) quality control methods with model-based feed-forward process control methods. The novel method is applied to the problem of controlling multi-layer overlay errors in lithography processes in semiconductor manufacturing. We first devised a multi-layer overlay model describing the introduction and flow of overlay errors from one layer to the next, which was then used to pursue a unified approach to RtR and feed-forward compensation of overlay errors in the wafer.
Finally, we extended the existing methodologies by considering inaccurately identified noise characteristics in the underlying error-flow model. This is a very common situation, since noise characteristics are rarely known with absolute accuracy. We formulated the uncertainty in the process noise characteristics using a Linear Fractional Transformation (LFT) representation and solved the problem by deriving a robust control law that guarantees product quality even under the worst-case scenario of parametric uncertainties. The theoretical results have been evaluated and demonstrated on a linear state-space model of an actual industrial process for automotive cylinder head machining. / text
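To make the feed-forward part of this abstract concrete, the sketch below uses a generic linear error-flow model of the kind SoV research works with, x_k = A x_{k-1} + B u_k + w_k, and picks the tooling adjustment u_k at each stage to cancel the predicted incoming error. The matrices, noise level, and initial error are assumptions for illustration; this is not the dissertation's controller, which additionally merges run-to-run estimation and handles model uncertainty through an LFT-based robust formulation.

```python
import numpy as np

rng = np.random.default_rng(1)

# Generic per-stage error-flow model (assumed for illustration):
#   x_k = A @ x_{k-1} + B @ u_k + w_k
# x_k: error state of the workpiece leaving stage k; u_k: controllable tooling adjustment.
A = np.array([[1.0, 0.2],
              [0.0, 1.0]])          # propagation of upstream error (assumed)
B = np.array([[1.0, 0.0],
              [0.0, 0.5]])          # effect of the compensating tooling (assumed)
noise_std, n_stages = 0.02, 4

def feedforward_u(x_prev):
    # Choose u_k to minimize ||A x_{k-1} + B u_k||, i.e. cancel the predicted incoming
    # error (least-squares solution; here B has full column rank).
    return -np.linalg.lstsq(B, A @ x_prev, rcond=None)[0]

x = np.array([0.5, -0.3])           # initial workpiece error (assumed)
x_nc = x.copy()                     # uncompensated trajectory, for comparison
for k in range(1, n_stages + 1):
    w = rng.normal(0.0, noise_std, size=2)
    x = A @ x + B @ feedforward_u(x) + w    # compensated: only process noise remains
    x_nc = A @ x_nc + w                     # uncompensated: upstream error keeps flowing
    print(f"stage {k}: |error| compensated = {np.linalg.norm(x):.3f}, "
          f"uncompensated = {np.linalg.norm(x_nc):.3f}")
```

In this toy setting B is invertible, so every component of the incoming error can be cancelled; when B is rank-deficient only part of it can, which is exactly the kind of situation the compensability notion introduced above characterizes.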