11 |
創新產品開發之概念生成─以Eee PC和Wii為例 張秉生 Unknown Date (has links)
As Taiwan's information technology system manufacturers move toward their own brands, the share of contract (OEM/ODM) manufacturing in their operations is shrinking while the share of own-brand business grows. Developing own-brand products, however, differs from developing products under contract for others: the specifications and concepts of own-brand products must largely be defined by the firm itself. A firm that merely follows the specifications of others' products produces nothing distinctive, consumers form only weak brand images and weak brand identification, and the goal of building an own brand is not met. Looking at global brand-name IT firms, SONY and Apple each have representative products that made their names, and in every case these are innovative products whose specifications and concepts the firm defined itself. How the concepts of such innovative products are generated is therefore the purpose of this study.
In recent years the Taiwanese computer company ASUS (in 2007) and the Japanese game console company Nintendo (in 2006) each launched a product that contributed greatly to its brand image and became the firm's representative product, so these two products are the cases of this study. Through qualitative case studies of the two companies, secondary data were collected and organized and supplemented with company interviews where the secondary data fell short, in order to find out under what market conditions the new product development teams of these two firms, although they belong to different industries, generated their new product concepts.
The results show that both case companies launched their new product development projects with the motivation to change the state of market competition: they hoped to create new markets with the new products and to strengthen their competitiveness through the advantage of entering those markets first. At the time, some attribute of the products already on the market exhibited performance oversupply, so the case companies broke away from the industry's established technology roadmap for that attribute and from the pursuit of ever higher performance and functionality. Taking a user-centered approach, they generated product concepts from the attributes and market reception of existing products together with user information obtained by the product developers. Although no external users were involved in the early stages, the developers were themselves users and routinely observed users' lifestyles and behavior, and in the later stages of development test prototypes were given to ordinary users to try; user information was thus obtained progressively through both channels. In addition, when a company lacks a strong corporate culture of "being different from others", a new product development team that is composed of cross-departmental members and operates independently of the organization's mainstream processes and culture will contribute to product innovation.
|
12 |
Rate Distortion Theory for Causal Video Coding: Characterization, Computation Algorithm, Comparison, and Code Design Zheng, Lin January 2012 (has links)
Due to the sheer volume of data involved, video coding is an important application of lossy source coding, and has received wide industrial interest and support as evidenced by the development and success of a series of video coding standards. All MPEG-series and H-series video coding standards proposed so far are based upon a video coding paradigm called predictive video coding, where video source frames Xᵢ, i = 1, 2, ..., N, are encoded in a frame-by-frame manner, and the encoder and decoder for each frame Xᵢ enlist help only from all previously encoded frames Sj, j = 1, 2, ..., i-1.
In this thesis, we will look further beyond all existing and proposed video coding standards,
and introduce a new coding paradigm called causal video coding, in which the encoder for each frame Xᵢ
can use all previous original frames Xj, j=1, 2, ..., i-1, and all previous
encoded frames Sj, while the corresponding decoder can use only all
previous encoded frames. We consider all studies, comparisons, and designs on causal video coding
from an information theoretic
point of view.
Let R*c(D₁,...,D_N) (R*p(D₁,...,D_N), respectively)
denote the minimum total rate required to achieve a given distortion
level D₁,...,D_N > 0 in causal video coding (predictive video coding, respectively).
A novel computation
approach is proposed to analytically characterize, numerically
compute, and compare the
minimum total rate of causal video coding R*c(D₁,...,D_N)
required to achieve a given distortion (quality) level D₁,...,D_N > 0.
Specifically, we first show that for jointly stationary and ergodic
sources X₁, ..., X_N, R*c(D₁,...,D_N) is equal
to the infimum of the n-th order total rate distortion function
R_{c,n}(D₁,...,D_N) over all n, where
R_{c,n}(D₁,...,D_N) itself is given by the minimum of an
information quantity over a set of auxiliary random variables. We
then present an iterative algorithm for computing
R_{c,n}(D₁,...,D_N) and demonstrate the convergence of the
algorithm to the global minimum. The global convergence of the
algorithm further enables us to not only establish a single-letter
characterization of R*c(D₁,...,D_N) in a novel way when the
N sources are an independent and identically distributed (IID)
vector source, but also demonstrate
a somewhat surprising result (dubbed the more and less coding
theorem)---under some conditions on source frames and distortion,
the more frames that need to be encoded and transmitted, the less encoded data actually has to be sent.
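In schematic form, the characterization just described can be written as

    R^{*}_{c}(D_1, \ldots, D_N) = \inf_{n \ge 1} R_{c,n}(D_1, \ldots, D_N),
    R_{c,n}(D_1, \ldots, D_N) = \min_{(U_1, \ldots, U_N) \in \mathcal{A}_n(D_1, \ldots, D_N)} I_n(U_1, \ldots, U_N),

where U_1, ..., U_N are the auxiliary random variables, \mathcal{A}_n(D_1, \ldots, D_N) is the set of auxiliary random variables meeting the distortion constraints, and I_n is the information quantity minimized at block length n; \mathcal{A}_n and I_n are written here only as placeholders, with their precise definitions given in the thesis.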
With the help of the algorithm, it is also shown by example that
R*c(D₁,...,D_N) is in general much smaller than the total rate
offered by the traditional greedy coding method, by which each frame is encoded in a locally optimal manner based on all information available to the encoder of that frame.
As a by-product, an extended Markov lemma is
established for correlated ergodic sources.
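For background on what such an iterative rate-distortion computation looks like, the sketch below is the classical Blahut-Arimoto algorithm for a single IID source; it is not the multi-source algorithm proposed in the thesis, and the binary source, Hamming distortion, and slope parameter are arbitrary illustrative choices.

import numpy as np

def blahut_arimoto(p_x, dist, s, iters=500):
    """Classical Blahut-Arimoto: one (rate, distortion) point at slope s <= 0."""
    q = np.full(dist.shape[1], 1.0 / dist.shape[1])   # reproduction marginal
    for _ in range(iters):
        w = q * np.exp(s * dist)                      # unnormalized test channel
        cond = w / w.sum(axis=1, keepdims=True)       # Q(x_hat | x)
        q = p_x @ cond                                # updated reproduction marginal
    joint = p_x[:, None] * cond
    rate = np.sum(joint * np.log2(cond / q))          # mutual information in bits
    distortion = np.sum(joint * dist)                 # expected distortion
    return rate, distortion

# Binary source with Hamming distortion, one slope value
p_x = np.array([0.4, 0.6])
dist = np.array([[0.0, 1.0], [1.0, 0.0]])
print(blahut_arimoto(p_x, dist, s=-3.0))              # one point on the R(D) curve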
From an information theoretic point of view,
it is interesting to compare causal video coding with predictive video coding, upon which all existing video coding standards proposed so far are based.
In this thesis, by fixing N=3,
we first derive a single-letter characterization
of R*p(D₁,D₂,D₃) for an IID
vector source (X₁,X₂,X₃) where X₁ and X₂ are independent, and then demonstrate the existence of such X₁,X₂,X₃ for which R*p(D₁,D₂,D₃)>R*c(D₁,D₂,D₃) under some conditions on source frames and distortion. This result makes causal video coding an attractive framework for future video coding systems and standards.
The design of causal video coding is also considered in the thesis from an information
theoretic perspective by modeling each frame as a stationary information source.
We first put forth a concept called causal scalar quantization, and then
propose an algorithm for designing optimum fixed-rate causal scalar quantizers
for causal video coding to minimize the total distortion among all sources.
Simulation results show that, in comparison with fixed-rate predictive scalar quantization, fixed-rate causal scalar quantization offers up to a 16% quality improvement (distortion reduction).
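As a point of reference for the quantizer-design step, the sketch below is the classical Lloyd-Max design of a fixed-rate scalar quantizer for a single source from training samples; it is not the causal quantizer design algorithm of the thesis (which couples several sources), and the Gaussian training data and 2-bit rate are arbitrary.

import numpy as np

def lloyd_max(samples, levels, iters=100):
    """Classical Lloyd-Max: fixed-rate scalar quantizer minimizing MSE."""
    # Initialise reproduction points from sample quantiles
    codebook = np.quantile(samples, (np.arange(levels) + 0.5) / levels)
    for _ in range(iters):
        # Nearest-neighbour condition: assign each sample to its closest point
        idx = np.argmin(np.abs(samples[:, None] - codebook[None, :]), axis=1)
        # Centroid condition: move each point to the mean of its cell
        for j in range(levels):
            cell = samples[idx == j]
            if cell.size:
                codebook[j] = cell.mean()
    idx = np.argmin(np.abs(samples[:, None] - codebook[None, :]), axis=1)
    mse = np.mean((samples - codebook[idx]) ** 2)
    return np.sort(codebook), mse

rng = np.random.default_rng(0)
samples = rng.normal(size=20000)      # hypothetical training samples for one frame
print(lloyd_max(samples, levels=4))   # 2-bit fixed-rate quantizer: points and MSE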
|
14 |
Personalised ontology learning and mining for web information gathering Tao, Xiaohui January 2009 (has links)
Over the last decade, the rapid growth and adoption of the World Wide Web has further exacerbated user needs for efficient mechanisms for information and knowledge location, selection, and retrieval. How to gather useful and meaningful information from the Web has become challenging to users. Capturing user information needs is key to delivering the information users desire, and user profiles can help to capture those needs. However, effectively acquiring user profiles is difficult. It is argued that if user background knowledge can be specified by ontologies, more accurate user profiles can be acquired and thus information needs can be captured effectively. Web users implicitly possess concept models that are obtained from their experience and education, and use these concept models in information gathering. Prior to this work, much research attempted to use ontologies to specify user background knowledge and user concept models. However, these works have a drawback in that they cannot move beyond the subsumption of super- and sub-class structure to emphasise the specific semantic relations in a single computational model. This has also been a challenge for years in the knowledge engineering community. Thus, using ontologies to represent user concept models and to acquire user profiles remains an unsolved problem in personalised Web information gathering and knowledge engineering. In this thesis, an ontology learning and mining model is proposed to acquire user profiles for personalised Web information gathering. The proposed computational model emphasises the specific is-a and part-of semantic relations in one computational model. World knowledge and users' Local Instance Repositories are used to discover and specify user background knowledge. From a world knowledge base, personalised ontologies are constructed by adopting automatic or semi-automatic techniques to extract user interest concepts, focusing on user information needs. A multidimensional ontology mining method, Specificity and Exhaustivity, is also introduced in this thesis for analysing the user background knowledge discovered and specified in user personalised ontologies. The ontology learning and mining model is evaluated by comparison with human-based and state-of-the-art computational models in experiments using a large, standard data set. The experimental results are promising. The proposed ontology learning and mining model helps to develop a better understanding of user profile acquisition, thus supporting better design of personalised Web information gathering systems. The contributions are increasingly significant, given both the rapid explosion of Web information in recent years and today's accessibility to the Internet and the full-text world.
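The emphasis on keeping is-a and part-of as distinct, typed relations in one model can be pictured with a purely illustrative sketch; the concept names and relations below are hypothetical, and this is not the thesis's computational model.

from collections import defaultdict

class PersonalisedOntology:
    """Toy ontology holding two typed relations: is-a and part-of."""

    def __init__(self):
        # relation name -> child/part -> set of parents/wholes
        self.relations = {"is-a": defaultdict(set), "part-of": defaultdict(set)}

    def add(self, relation, source, target):
        self.relations[relation][source].add(target)

    def ancestors(self, concept, relation="is-a"):
        """All concepts reachable from `concept` via the given relation."""
        seen, stack = set(), [concept]
        while stack:
            for parent in self.relations[relation][stack.pop()]:
                if parent not in seen:
                    seen.add(parent)
                    stack.append(parent)
        return seen

# Hypothetical user-interest concepts
onto = PersonalisedOntology()
onto.add("is-a", "laptop", "computer")
onto.add("is-a", "computer", "electronic device")
onto.add("part-of", "battery", "laptop")
print(onto.ancestors("laptop"))              # e.g. {'computer', 'electronic device'}
print(onto.ancestors("battery", "part-of"))  # {'laptop'}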
|
15 |
Os estudos de usuários nos programas de pós-graduação em ciência da informação do Nordeste Simões, Angélica Clementino 05 February 2014 (has links)
Previous issue date: 2014-02-05 / Coordenação de Aperfeiçoamento de Pessoal de Nível Superior - CAPES / User studies focus on how people use and behave in relation to information. However, many studies carried out under the theme of "user studies" address the topic of information users without constituting an empirical study of groups of subjects examined in terms of their need for, search for, and use of information. This research therefore aimed to map the academic production in the area of information user studies in the graduate programs in Information Science of the Brazilian Northeast: the Federal University of Bahia (UFBA), the Federal University of Paraíba (UFPB), and the Federal University of Pernambuco (UFPE). Quantitative-qualitative in nature and descriptive in type, the research took as its universe 186 master's theses, which were analyzed according to the following categories: type of users studied, themes addressed, frequency of the approaches identified in the literature, and the methods and techniques of data collection and analysis. The results show that only 25 theses, or 13.44% of the research carried out in these graduate programs, constitute user studies. Most of them adopted alternative approaches and were configured as exploratory and/or descriptive research of a quantitative-qualitative nature. As for the data collection instruments and techniques, most relied predominantly on mixed questionnaires. The research thus provides a deeper view of user studies in the graduate programs in Information Science of the Northeast, and it is hoped that the results will enrich research and teaching practice in the undergraduate and graduate Information Science curricula.
|
16 |
教育單位導入ITIL之分析與評估-以T學校為例 / The Analysis and Evaluation of Implementing ITIL for Education Institute - A Case Study of T School 黃世榮 Unknown Date (has links)
In the era of the networked enterprise, corporate e-business has become a key lever for raising core competitiveness. Under the combined impact of globalization and a declining birth rate, schools too find themselves in an increasingly competitive education market. Poor IT service quality has long been a persistent problem, and in the late 1980s the British government therefore developed a comprehensive service management standard, ITIL.
Scholars have reported that 51.4% of university students are dissatisfied with their school experience, and that among the items surveyed, satisfaction with campus information services ranks third from the bottom. Surveying the evolution of IT service management and the development of campus e-services, this study seeks to understand which services users care about most and which items most urgently need improvement.
Using a single-case study method, data were collected at the case school. The analysis shows a considerable gap in satisfaction between the providers and the users of information services. Finally, in view of the provider's current situation and with reference to the ITIL framework, the study offers recommendations for improving information services and for adjusting service levels appropriately, so as to build up an overall information service plan step by step.
|
17 |
Determinação de caminhos mínimos em aplicações de transporte público: um estudo de caso para a cidade de Porto Alegre Bastos, Rodrigo 27 September 2013 (has links)
Previous issue date: 2013 / SIMTUR - Sistema Inteligente De Monitoramento de Tráfego Urbano / The increasing use of automobiles and motorcycles has caused a continuous degradation of urban traffic in large cities. This scenario is aggravated by shortcomings in current public transportation systems, caused in part by the lack of information provided to users. This work presents a computational model for a public transportation user information system. Unlike other studies based on the classical Dijkstra algorithm, the approach presented here uses the A* algorithm to solve the shortest path problem that arises in this context, in order to reduce response time so that the model can be used in a real-time user information system. The proposed model considers multiple decision criteria, such as total distance traveled and number of transfers. A case study was carried out with real data from the public transportation network of Porto Alegre in order to evaluate the computational model developed. The results were compared with those obtained with Dijkstra's algorithm and indicate that combining the A* algorithm with acceleration techniques significantly reduces space complexity, processing time, and the number of transfers.
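A minimal sketch of the kind of A* search referred to above, using straight-line distance as the admissible heuristic; the stops, coordinates, and edge costs are invented for illustration, and the multi-criteria weighting and acceleration techniques of the actual model are omitted.

import heapq
import math

def a_star(graph, coords, start, goal):
    """A* over a stop graph; heuristic = Euclidean distance to the goal."""
    def h(node):
        (x1, y1), (x2, y2) = coords[node], coords[goal]
        return math.hypot(x2 - x1, y2 - y1)

    open_heap = [(h(start), 0.0, start, [start])]   # (f, g, node, path)
    best_g = {start: 0.0}
    while open_heap:
        f, g, node, path = heapq.heappop(open_heap)
        if node == goal:
            return g, path
        for neighbour, cost in graph.get(node, []):
            new_g = g + cost
            if new_g < best_g.get(neighbour, float("inf")):
                best_g[neighbour] = new_g
                heapq.heappush(open_heap,
                               (new_g + h(neighbour), new_g, neighbour, path + [neighbour]))
    return math.inf, []

# Hypothetical stops (coordinates in km) and travel costs between them
coords = {"A": (0, 0), "B": (1, 0), "C": (1, 1), "D": (2, 1)}
graph = {"A": [("B", 1.2), ("C", 2.0)], "B": [("D", 1.5)], "C": [("D", 1.1)]}
print(a_star(graph, coords, "A", "D"))   # (approximately 2.7, ['A', 'B', 'D'])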
|
18 |
數位產品網路行銷之顧客資訊滿意度衡量模式 / The Measurement Model of Customer Information Satisfaction for Internet Marketing of Digital Products 王怡舜, Yi-Shun Wang Unknown Date (has links)
The MIS literature has seldom addressed models for measuring customer information satisfaction in the electronic commerce environment. The existing user information satisfaction (UIS) and end-user computing satisfaction (EUCS) instruments are suited mainly to the traditional data processing environment or the end-user computing environment. This study therefore develops a customer information satisfaction measurement model for the Internet marketing of digital products. First, the conceptual definition of web customer information satisfaction (WCIS) is discussed; the author derives preliminary dimensions and items from the literature and supplements and adjusts them through interviews, focus groups, and a pilot study. The study then describes the item generation process, the data collection methods, and the scale purification steps. Two quota samples are used for exploratory and confirmatory factor analysis, in which the reliability, content validity, criterion-related validity, convergent validity, discriminant validity, and nomological validity of the WCIS measurement model are rigorously examined. Finally, the study discusses practical and academic applications of the WCIS measurement model, notes several limitations, and suggests directions for future research. The author hopes that the proposed WCIS measurement model can be used by other researchers to develop Internet marketing and electronic commerce theories.
CHAPTER 1. INTRODUCTION….………………………………………….….…1
CHAPTER 2. DOMAIN OF WEB CUSTOMER INFORMATION
SATISFACTION ……………………………………………………………………..4
2.1 The Focus of This Study…………………………………………….…..…….4
2.2 The Impact of E-commerce on the Business Process of DPSPs….……….…..5
2.3 Instruments for Measuring User Information Satisfaction (UIS) and
End-User Computing Satisfaction (EUCS)…….……………..……………….6
2.4 The Conceptual Definition of Web Customer Information Satisfaction ….…..9
2.5 The Theoretical Framework for Assessing WCIS……………………….…..12
2.6 Service Quality versus Customer Satisfaction………………………….……14
CHAPTER 3. GENERATION OF SCALE ITEMS……….……………………...16
3.1 Generation of Initial Item List……………………………………………...16
3.2 Pilot Study…………………………………………………………………....17
CHAPTER 4. SCALE PURIFICATION AND EXPLORATORY
FACTOR ANALYSIS…...…………………………………………………………..19
4.1 Sample and Procedure……………………………………………………….19
4.2 Item Analysis and Reliability Estimates……………………………………..20
4.3 Identifying the Factor Structure of the WCIS Construct…..…………………21
4.4 Reliability…………………………………………………………………….23
4.5 Content Validity………………………………………………………………25
4.6 Criterion-Related Validity……………………………………………………25
4.7 Reliability and Criterion-Related Validity by Type of Web Site……………..26
4.8 Discriminant and Convergent Validity……………………………………….27
4.9 Nomological Validity………………………………………………………...28
CHAPTER 5. THE CONFIRMATORY FACTOR ANALYSIS OF THE
WCIS INSTRUMENT………………………………………………...……………30
5.1 Need for Confirmatory Analysis………………………………………….….30
5.2 Methods……………………………………………………………………....33
5.3 Data Collection for Confirmatory Analysis………………………………….41
5.4 Alternative Models…………………………………………………..……….43
5.5 Criteria for Comparing Model-Data Fit…..………………………………….46
5.6 Checks for Statistical Assumptions………………………………………….49
5.7 Estimation Method…………………………………………………………...50
5.8 Results………………………………………………………………………..50
5.9 Assessment of Reliability and Validity………………………………………55
5.10 The Measurement of Service Quality……………………………………….60
5.10.1 The Development of SERVQUAL and IS-adapted SERVQUAL………61
5.10.2 Refinement of an EC-adapted SERVQUAL…………………………...66
5.11 Research Findings for Confirmatory Analysis……………………………...72
5.11.1 Findings for Question One……………………………………………72
5.11.2 Findings for Question Two……………………………………………74
5.12 Comparison of Underlying Dimensions Between UIS, EUCS
and WCIS…………………………………………………………………..76
CHAPTER 6. IMPLICATIONS………….………………………………………..77
6.1 Implications for Practice……………………………………………….....….77
6.2 Implications for Research.……………………………………………………79
CHAPTER 7. CONCLUSION……..………………………………………………81
REFERENCE……………...………………………………………………………..83
GLOSSARY…………………………………………………………………………95
APPENDIX A Measurement of Web Customer Information Satisfaction –
Forty-Three Items Used in the Pilot Study…………………………………………...97
APPENDIX B Observed Correlation Matrix of WCIS Instrument in
Confirmatory Analysis..……………………………………………………………....99
APPENDIX C Observed Correlation Matrix of Initial EC-SERVQUAL
Instrument…………….……………………...………………………………………100
APPENDIX D The LISREL Program for WCIS Model 1……..…………………...101
APPENDIX E The LISREL Program for WCIS Model 2.……….………………...102
APPENDIX F The LISREL Program for WCIS Model 3...………………………...104
APPENDIX G The LISREL Program for WCIS Model 4..………………………...105
APPENDIX H The LISREL Program for EC-SERVQUAL Model 1.…………….106
APPENDIX I The LISREL Program for EC-SERVQUAL Model 2.…………….107
APPENDIX J The LISREL Program for The Structural Model Between
WCIS and EC-SERVQUAL Measures………………………………..………………...108
ABOUT THE AUTHOR…………………………………………………………...110 / MIS literature has not addressed the measurement of web customer information satisfaction (WCIS) in electronic commerce. Current models for measuring user information satisfaction (UIS) and end-user computing satisfaction (EUCS) are perceived as inapplicable as they are targeted primarily towards either conventional data processing or the end-user computing environment. This study develops a comprehensive model and instrument for measuring customer information satisfaction for web sites that market digital products and services. This paper first discusses the concepts and definitions of the WCIS construct from the literature. The researcher summarizes his findings in a theoretical framework. Based on this framework, the researcher develops an instrument to measure web customer information satisfaction. The procedures used in generating items, collecting data, and purifying a multiple-item scale are described. The researcher has carefully examined evidence of reliability, content validity, criterion-related validity, convergent validity, discriminant validity, and nomological validity by analyzing data from two quota samples. The potential applications for practitioners and researchers are then explored. Finally, the researcher concludes this study by discussing limitations and potential future research. The researcher hopes that the proposed WCIS instrument with good reliability and validity can be used by other researchers to develop and test Internet marketing and EC theories in the future.
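As a small illustration of the reliability assessment used during scale purification (not taken from the thesis; the item responses below are simulated), Cronbach's alpha for a multi-item scale can be computed as follows.

import numpy as np

def cronbach_alpha(items):
    """items: (respondents x items) matrix of scale scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of individual item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of the summed scale
    return (k / (k - 1)) * (1.0 - item_vars / total_var)

# Simulated responses: 200 respondents, 5 correlated 7-point items
rng = np.random.default_rng(1)
trait = rng.normal(size=(200, 1))
responses = np.clip(np.round(4 + 1.2 * trait + rng.normal(scale=0.8, size=(200, 5))), 1, 7)
print(round(cronbach_alpha(responses), 2))        # reliability of the simulated scale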
|