1.
COTS System Implementation: Diagnosis Decision of the Misfit Analysis. Kuan, Pei-min, 02 August 2008
Commercial off-the-shelf (COTS) systems such as enterprise resource planning (ERP) systems have become mature technologies for supporting inter- and intra-company business processes, even in small and medium-sized organizations. However, such systems are complex and expensive, and the decision to install a COTS package requires mechanisms for determining whether it fits the firm's requirements. This paper presents a misfit diagnosis decision principle grounded in the COTS misfit analysis methodology proposed by Wu et al. (2008). We propose a systematic diagnosis decision principle that extends the misfit analysis methodology by supporting the decision between customization and business process reengineering (BPR).
Our research contributes to the misfit analysis methodology by proposing a systematic diagnosis decision principle that helps both software vendors and organizations when misfits between firm requirements and COTS functionality are encountered. The results indicate that with this approach, organizations can more easily and systematically determine whether each misfit should be resolved through customization or through BPR. The approach also helps evaluate the effort required for COTS customization and for business process reengineering on a per-misfit basis, thereby supporting that decision and reducing the risk of implementing COTS.
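To illustrate the flavor of such a per-misfit decision rule, the following is a minimal Python sketch, assuming each misfit carries separate effort estimates for customization and for BPR and recommending the lower-effort option; the Misfit record, the effort figures, and the comparison rule are hypothetical illustrations, not the actual principle from the thesis.

```python
# A minimal sketch, assuming per-misfit effort estimates for customization
# and BPR; all names and figures below are hypothetical illustrations.
from dataclasses import dataclass

@dataclass
class Misfit:
    name: str
    customization_effort: float  # estimated effort to adapt the COTS package
    bpr_effort: float            # estimated effort to reengineer the process

def diagnose(misfit: Misfit) -> str:
    """Recommend the lower-effort resolution for a single misfit."""
    return "customize" if misfit.customization_effort <= misfit.bpr_effort else "BPR"

misfits = [
    Misfit("order approval workflow", customization_effort=3.0, bpr_effort=8.0),
    Misfit("localized tax reporting", customization_effort=9.0, bpr_effort=2.5),
]
for m in misfits:
    print(f"{m.name}: {diagnose(m)}")
# order approval workflow: customize
# localized tax reporting: BPR
```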
2.
A Methodology and DSS for ERP Misfit Analysis. Shin, Shin-shing, 27 May 2007
Commercial off-the-shelf enterprise resource planning (ERP) systems have been adopted by large companies to support their inter- and intra-business processes, and midsize firms are now investing in ERP systems as well. However, research has indicated that about three quarters of attempted ERP projects turn out to be unsuccessful. A common problem encountered in adopting ERP software is the issue of fit, or alignment, between the package and the organization.
This paper presents an ERP misfit analysis methodology, grounded in task-technology fit theory and cognitive fit theory, for measuring misfits between ERP candidates and an enterprise's requirements before implementation (ex ante). A decision support system (DSS) prototype embedding the approach has been developed, and a usability evaluation was performed on the prototype to demonstrate the approach. With this approach, organizations can more easily and systematically determine where the misfits are and how severe they are, thereby reducing the risks of implementing ERP systems. Our research contributes a practical solution to the problem of misfit analysis.
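As an illustration of what measuring where the misfits are and how severe they are could look like computationally, here is a minimal Python sketch, assuming requirements and candidate functionality are rated on a shared ordinal scale; the rating scale, the weights, and the shortfall-only gap score are hypothetical simplifications, not the thesis's task-technology-fit instrument.

```python
# A minimal sketch of misfit measurement, assuming requirements and an ERP
# candidate's functionality are both rated on a common 0-5 scale. The
# weighting scheme and aggregate score are illustrative assumptions.

def misfit_degree(requirements: dict[str, float],
                  candidate: dict[str, float],
                  weights: dict[str, float]) -> tuple[dict[str, float], float]:
    """Return per-requirement misfit (shortfall only) and a weighted total."""
    gaps = {
        req: max(level - candidate.get(req, 0.0), 0.0)
        for req, level in requirements.items()
    }
    total = sum(weights.get(req, 1.0) * gap for req, gap in gaps.items())
    return gaps, total

requirements = {"multi-currency": 5.0, "batch traceability": 4.0}
candidate = {"multi-currency": 5.0, "batch traceability": 1.0}
weights = {"batch traceability": 2.0}
gaps, total = misfit_degree(requirements, candidate, weights)
print(gaps, total)  # {'multi-currency': 0.0, 'batch traceability': 3.0} 6.0
```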
3.
Towards optimal measurement and theoretical grounding of L2 English elicited imitation: Examining scales, (mis)fits, and prompt features from item response theory and random forest approaches. Ji-young Shin (11560495), 14 October 2021
The present dissertation investigated the impact of scales / scoring methods and prompt linguistic features on the measurement quality of L2 English elicited imitation (EI). Scales and scoring methods are important to the validity and reliability of L2 EI tests, but little is known about them (Yan et al., 2016). Prompt linguistic features are also known to influence EI test quality, particularly item difficulty, but item discrimination and corpus-based, fine-grained measures have rarely been incorporated into examinations of the contribution of prompt linguistic features. The current study addressed these research needs using item response theory (IRT) and random forest modeling.
Data consisted of 9,348 oral responses to forty-eight items, including EI prompts, item scores, and rater comments, collected from 779 examinees of an L2 English EI test at Purdue University. First, the study explored the current and alternative EI scales / scoring methods that measure grammatical / semantic accuracy, focusing on optimal IRT-based measurement qualities (RQ1 through RQ4 in Phase I). Next, the project identified important prompt linguistic features that predict EI item difficulty and discrimination across different scales / scoring methods and proficiency levels, using multi-level modeling and random forest regression (RQ5 and RQ6 in Phase II).
The main findings included the following: (1) collapsing the exact-repetition and paraphrase categories led to more optimal measurement, i.e., adequate item parameter values, category functioning, and model / item / person fit (RQ1); (2) there were fewer misfitting persons, with lower proficiency and a higher frequency of unexpected responses in the extreme categories (RQ2); (3) the inconsistency in qualitatively distinguishing semantic errors and the wide range of grammatical accuracy within the minor-error category contributed to misfit (RQ3); (4) a quantity-based, 4-category ordinal scale outperformed quality-based or binary scales (RQ4); (5) sentence length significantly explained item difficulty only, with small variance explained (RQ5); and (6) corpus-based lexical measures and phrase-level syntactic complexity were important in predicting item difficulty, particularly for the higher ability level (RQ6). The findings have implications for EI scale / item development in human and automatic scoring settings and for L2 English proficiency development.
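As a rough illustration of the Phase II analysis style, the sketch below fits a scikit-learn random forest regressor on simulated data to rank a few prompt features by their importance for item difficulty; the feature set, the simulated difficulties, and the model settings are assumptions for illustration, not the dissertation's actual corpus-based measures or results.

```python
# A minimal sketch of random-forest importance ranking for EI item difficulty.
# Feature names and all data below are hypothetical and simulated.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n_items = 48
# Hypothetical prompt features: sentence length, mean lexical frequency,
# and a phrase-level syntactic complexity index.
X = np.column_stack([
    rng.integers(5, 20, n_items),    # sentence length (words)
    rng.normal(4.0, 0.5, n_items),   # mean log word frequency
    rng.normal(1.5, 0.4, n_items),   # phrasal complexity index
])
# Simulated IRT item difficulties, loosely driven by the features plus noise.
y = 0.1 * X[:, 0] - 0.5 * X[:, 1] + 0.8 * X[:, 2] + rng.normal(0, 0.3, n_items)

model = RandomForestRegressor(n_estimators=500, random_state=0)
model.fit(X, y)
for name, imp in zip(["sentence length", "lexical frequency", "phrasal complexity"],
                     model.feature_importances_):
    print(f"{name}: {imp:.2f}")
```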