81

Analysis of internet banking services for Hong Kong banking industry: the case of Hong Kong Bank

Tsui, Kin-kei, Ivan., 徐建基. January 1996 (has links)
Published or final version / Business Administration / Master / Master of Business Administration
82

The myth of cyberfaith

Saunders, George A. January 2002 (has links)
This study used random sample survey data from the Middletown Area Survey of 2000 to examine the use of the Internet for religious purposes. The survey data was supplemented by follow-up phone interviews with survey respondents who identified themselves as frequent users of the Internet for religious purposes. Two hypotheses were tested: the Church Dissatisfaction Hypothesis - that religious use of the Internet is positively correlated with church dissatisfaction, and the Conservative Religiosity Hypothesis - that religious use of the Internet is positively correlated with conservative religiosity. This study found no evidence for the Church Dissatisfaction Hypothesis, but did find evidence for the Conservative Religiosity Hypothesis. In fact, 80% of those who used the Internet for religious purposes fit the study's definition of conservative religiosity. / Department of Sociology
83

Online Islamic organizations and measuring Web effectiveness

Daniels, Minji 12 1900 (has links)
Approved for public release; distribution is unlimited. / Experts estimate that the number of websites maintained by various Islamic extremists has grown to hundreds in recent years. Innovative operational capabilities enabled by Internet technology certainly pose serious challenges to U.S. counter-terrorism efforts. However, greater attention must be given to Islamic organizations that wage information campaigns perpetuating resentment against the United States and her allies and seeking to discredit them. While these sites may not openly call for violence, their sharing of common causes and goals with extremist organizations is worrisome. The repudiation of Western systems and global Islamization under the Shariah system is often a transparent theme. The purpose of this thesis is to evaluate how effectively these websites attract and engage audiences to promote their cause, by applying a web performance methodology commonly accepted in the commercial industry. / Lieutenant, United States Navy
84

Choosing between travel agencies and the Internet

17 April 2015 (has links)
M.A. (Tourism and Hospitality Management) / Travel agents have traditionally been seen as the key intermediary between suppliers of travel services and the traveller. Developments in information technology offer consumers an alternative to booking via a travel agent - the option to plan and arrange holidays online. Owing to the ever-developing nature of technology, travellers have a multitude of choices in their everyday lives - particularly in making decisions regarding travel. Travellers will seek to optimise their choices by selecting the distribution channel that provides them with the greatest perceived value. The primary goal of the study is to explore the underlying factors that influence consumer behaviour in making travel decisions, with specific reference to choosing between booking through a travel agent and booking online. Research on travel decision-making in South Africa is limited. In attempting to fill this void, the study surveyed 408 respondents residing in South Africa using a structured questionnaire and examined their preferences for booking holiday flights or accommodation through a travel agent or the Internet. A literature review was undertaken to create a framework for the study and to acknowledge previous research related to travel decision-making. Exploratory factor analysis was used to identify factors influencing traveller decision-making. Statistical tests, such as Chi-square and correlation tests, were further used to examine the degree of relationship and significance between items and factors. Factors that influence travel decision-making were identified, namely trust and financial risk perception, convenience and adoption of technology, price, personal contact or empathy, and the role of demographic factors such as age, income and ethnicity...
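As an illustration of the kind of Chi-square test mentioned in this abstract, the minimal sketch below checks whether booking-channel preference is independent of age group. The contingency table is hypothetical and does not reproduce the survey's data.

```python
# Illustrative only: hypothetical counts, not the thesis's actual survey data.
import numpy as np
from scipy.stats import chi2_contingency

# Rows: age groups; columns: preferred booking channel (travel agent, online).
observed = np.array([
    [55, 80],   # 18-34
    [70, 60],   # 35-54
    [65, 35],   # 55+
])

chi2, p_value, dof, expected = chi2_contingency(observed)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p_value:.4f}")
if p_value < 0.05:
    print("Reject independence: channel preference appears to vary with age group.")
else:
    print("No significant association detected.")
```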
85

Evidence-based medicine as a web-based information-seeking model for health care practitioners

12 January 2009 (has links)
D.Litt. et Phil. / The practice of medicine changes constantly and rapidly. Consequently, it is difficult for clinicians to learn about innovations, given the vast quantity of information available. Evidence-based medicine (EBM) is the process by which practitioners turn clinical problems into questions and then systematically integrate personal clinical expertise with the best available external evidence as the basis for clinical decisions. To practice EBM, the practitioner is required to search the literature for relevant material and then to synthesise knowledge and apply the findings to each patient. Clinicians require fast and specific access to multiple data sources, but the availability of electronic full-text documents has substantially exacerbated the problem of finding time, amid the demands of clinical practice, to read the clinical literature, a problem further compounded by the fact that the Web contains much health-related misinformation. Clinicians therefore require a means of searching the literature that enhances the retrieval of accurate and evaluated clinical data from ranked resources, whereby the most relevant information is retrieved first from the most likely source. Strong correlations exist between the four primary steps in EBM and the formula commonly used in search strategy design in the field of information seeking. The similarities inherent in these steps suggest that an evidence-based approach to information seeking might enable end-users in the health professions to enhance their searching skills and to translate the clinical question into an appropriate information-seeking strategy. A main problem and two sub-problems were investigated, namely whether:
· a Web-based EBM information-seeking model could be designed to enhance the information-seeking skills of healthcare practitioners;
· it was possible to design an information-seeking model more closely aligned with the clinical decision-making model familiar to healthcare practitioners;
· it was possible to design such a model in a manner that could further enhance the translation of the clinical question into an appropriate information-seeking strategy.
Various models in medicine and in the domain of information seeking were investigated. It was found that the model of the clinical decision-making process accorded with all six phases of the information-seeking process (ISP), whereas other information-seeking models only addressed the ISP from the formulation of the problem onwards, thus ignoring the prior stages of initiation, selection and exploration in the ISP. A Web-based EBM information-seeking model (Model C) was devised and tested for compatibility against a general Web-based information-seeking model, and was found to be valid. Model C was further empirically assessed against a Web site design methodology, and was again found to be compatible. Model C thus offers a unique approach to EBM information seeking: it incorporates all aspects of the clinical decision-making metaphor, as well as the "PICO" EBM filters (Patient/Problem, Intervention, Comparative Intervention and Outcome), into a facet analysis template for the design of a clinical search strategy. Prior to selection of the EBM information resource, Model C further allows for the ranking of each resource and for the design of individual browsing and/or analytical search strategies, as appropriate, so as to enhance EBM information seeking amongst healthcare practitioners.
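The facet-analysis idea behind the PICO filters mentioned above can be illustrated with a small sketch: synonyms within a facet are combined with OR, and facets are combined with AND to form a search strategy. The helper function and the example terms below are hypothetical illustrations, not part of Model C itself.

```python
# A minimal sketch of turning a PICO-style facet template into a Boolean
# search strategy. Facet terms are invented examples, not the thesis's content.

def build_pico_query(patient, intervention, comparison, outcome):
    """Combine synonyms within a facet with OR, and facets with AND."""
    facets = [patient, intervention, comparison, outcome]
    clauses = []
    for terms in facets:
        if terms:  # a facet may be left empty (e.g. no comparison intervention)
            clauses.append("(" + " OR ".join(f'"{t}"' for t in terms) + ")")
    return " AND ".join(clauses)

query = build_pico_query(
    patient=["type 2 diabetes", "adult"],
    intervention=["metformin"],
    comparison=["sulfonylurea"],
    outcome=["glycaemic control", "HbA1c"],
)
print(query)
```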
86

The development of critical thinking skills through the evaluation of internet materials

Barnett, David January 2017 (has links)
Submitted in partial fulfilment of the requirements for the degree Masters in Education (Educational Technology), School of Education, Faculty of Humanities, University of the Witwatersrand, 2016 / The internet supplies a continuous stream of information to our students. The information gleaned from the internet is ever-changing and scanty, and researchers have used the term "paucity" to describe internet information. It is difficult to trust this information and value it as knowledge. The need for developing Critical Thinking and its application is advanced both internationally and in South Africa. This study investigated the development of specific critical thinking skills for the purpose of evaluating internet materials for trustworthiness. Within the study, a series of lessons was designed to develop Critical Thinking skills amongst a group of Grade 11 students at a private high school in South Africa. Once these skills were acquired, the students were able to compare different internet materials and make a well-reasoned argument about the credibility of those materials. The key skills were taught through a Learning Management System (LMS), which was used as a medium for isolating selected internet materials and for developing a pathway of learning. Several educational theories, models and philosophies were investigated and layered into the fabric of this research report. Critical thinking skills were developed through a blended approach: although an LMS was the primary medium of the Critical Thinking process, the teacher was the key agent of its facilitation. The research premise was based on deductive reasoning and presumed that it was necessary to use Critical Thinking to examine internet material for trustworthiness. The design made use of a case study as the preferred method to investigate the premise. An inductive approach was then implemented to interpret the data obtained from the evaluation of internet materials. Pre- and post-tests and scales were administered, and a comparison was made of the students' confidence and ability to evaluate internet materials using specific critical thinking skills. Comparison of both the qualitative and quantitative results gave evidence of enhanced and more effective application of the specific critical thinking skills brought about through this intervention. / XL2018
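As a purely illustrative sketch of comparing matched pre- and post-intervention scores of the kind described above: the study does not report its exact statistical procedure, and the numbers and the paired t-test below are invented assumptions, not the thesis's analysis.

```python
# Illustrative only: one common way to compare matched pre/post scores.
from scipy.stats import ttest_rel

# Hypothetical confidence ratings (1-5) before and after the LMS-based lessons,
# one pair per student.
pre  = [2, 3, 2, 4, 3, 2, 3, 3, 2, 4]
post = [3, 4, 3, 4, 4, 3, 4, 3, 3, 5]

t_stat, p_value = ttest_rel(post, pre)
print(f"paired t = {t_stat:.2f}, p = {p_value:.4f}")
```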
87

The role of advertising and information asymmetry on firm performance

Unknown Date (has links)
Research linking marketing to financial outputs has been gaining significance in the marketing discipline. The pertinent questions are, therefore: how can marketing improve measures of firm performance and draw potential investors to the company, and where is the quantitative proof to back up these assertions? This research investigates the role of marketing expenditures in the context of initial public offerings (IPOs). The proposed theoretical framework comes from the marketing and finance literature, and econometric models are used to test the hypotheses. First, we replicate the results of a previous study by Luo (2008) showing a relationship between the firm's pre-IPO marketing spending and IPO underpricing. Next, we extend the previous study by looking at the IPO's long-run returns, types of risk, analyst coverage, and market/industry characteristics. The results of this study, based on a sample of 2,103 IPOs from 1996 to 2008, suggest that increased marketing spending positively impacts firm performance. We examine different measures of firm performance, such as risk and long-run performance, whose results are important to the firm, its shareholders, and potential investors. This study analyzes the impact marketing spending has on IPO characteristics (IPO underpricing in the short run and cumulative abnormal returns in the long run); risk characteristics (systematic, unsystematic, bankruptcy risk, and total risk); analyst coverage characteristics (the number of analysts, optimistic coverage, and forecast error); and market characteristics (market volatility and industry type). We control for variables such as firm size, profitability, and IPO characteristics. The results show that increased marketing spending lowers underpricing, lowers bankruptcy risk, lowers total risk, leads to greater analyst coverage, leads to more favorable analyst coverage, and lowers analyst forecast error. For theory, this paper advances the literature on the marketing-finance interface by extending the market-based assets and signaling theories. For practice, the results indicate that spending more money on marketing before the IPO and disclosing this information produces positive bottom-line results for the firm. KEYWORDS: Marketing-Finance, Risk, Financial Analysts, Marketing Spending, Firm Performance, Marketing Strategy Meets Wall Street, Long-Run Firm Performance, Underpricing, Stock Recommendations, Initial Public Offering, Marketing Strategy, Econometric Model. / by Monica B. Fine. / Thesis (Ph.D.)--Florida Atlantic University, 2012. / Includes bibliography. / Electronic reproduction. Boca Raton, Fla., 2012. Mode of access: World Wide Web.
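The econometric approach described above can be summarized schematically by a regression of the following kind; the variable names, the log transform, and the set of controls are illustrative assumptions, not the dissertation's exact specification.

```latex
% Illustrative specification only; not the dissertation's exact model.
\[
\mathrm{Underpricing}_i = \alpha + \beta_1 \,\ln(\mathrm{MktgSpend}_i) + \gamma^{\prime}\mathbf{X}_i + \varepsilon_i
\]
```

Here $\mathbf{X}_i$ collects controls such as firm size, profitability, and IPO characteristics; a negative estimate of $\beta_1$ would correspond to the reported finding that higher pre-IPO marketing spending lowers underpricing.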
88

The triangle of reflections

Unknown Date (has links)
This thesis presents some results in triangle geometry discovered using dynamic software, namely Geometer's Sketchpad, and confirmed with computations using Mathematica 9.0. Using barycentric coordinates, we study geometric problems associated with the triangle of reflections of a given triangle T, yielding interesting triangle centers and simple loci such as circles and conics. These lead to some new triangle centers with reasonably simple coordinates, and also to new properties of some known, classical centers. In particular, we show that the Parry reflection point is the common point of two triads of circles, one associated with the tangential triangle and another with the excentral triangle. More interestingly, we show that a certain rectangular hyperbola through the vertices of T appears as the locus of the perspector of a family of triangles perspective with T, and in a different context as the locus of the orthology center of T with another family of triangles. / Includes bibliography. / Thesis (M.S.)--Florida Atlantic University, 2014. / FAU Electronic Theses and Dissertations Collection
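As orientation for the object studied here, and assuming the standard construction in which each vertex is reflected in the line through the other two, the vertices of the triangle of reflections have the following homogeneous barycentric coordinates. This is a routine computation offered as a sketch, not a result quoted from the thesis.

```latex
% Routine computation, assuming A', B', C' are the reflections of A, B, C in
% the sidelines BC, CA, AB respectively; not quoted from the thesis.
\[
\begin{aligned}
A' &= \bigl(-a^{2} \;:\; a^{2}+b^{2}-c^{2} \;:\; c^{2}+a^{2}-b^{2}\bigr),\\
B' &= \bigl(a^{2}+b^{2}-c^{2} \;:\; -b^{2} \;:\; b^{2}+c^{2}-a^{2}\bigr),\\
C' &= \bigl(c^{2}+a^{2}-b^{2} \;:\; b^{2}+c^{2}-a^{2} \;:\; -c^{2}\bigr).
\end{aligned}
\]
```

Substituting such coordinates into the general barycentric equations of circles and conics is how loci of the kind mentioned in the abstract are typically checked with a computer algebra system such as Mathematica.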
89

Information discovery from semi-structured record sets on the Web.

January 2012 (has links)
万维网(World Wide Web，简称Web)从上世纪九十年代出现以来在深度和广度上都得到了巨大的发展，大量的Web应用前所未有地改变了人们的生活。Web的发展形成了一个庞大而有价值的信息资源，然而由于Web内容异质性给自动信息抽取所造成的困难，这个信息源并没有被充分地利用。因此，Web信息抽取是Web信息应用过程中非常关键的一环。一般情况下，一个网页用来描述一个单独的对象或者一组相似的对象。例如，关于某款数码相机的网页描述了该相机的各方面特征，而一个院系的教授列表则描述了一组教授的基本信息。相应地，Web信息抽取可以分为两大类，即面向单个对象细节的信息抽取和面向组对象记录的信息抽取。本文集中讨论后者，即从单一的网页中抽取一组半结构化的数据记录。 / 本文提出了两个框架来解决半结构化数据记录的抽取问题。首先介绍一个基于数据记录切分树的框架RST。该框架中提出了一个新的搜索结构，即数据记录切分树。基于所设计的搜索策略，数据记录切分树可以有效地从网页中抽取数据记录。在数据记录切分树中，对应于可能的数据记录的DOM子树组是在搜索过程中动态生成的，这使得RST框架比已有的方法更具灵活性。比如在MDR和DEPTA中，DOM子树组是根据预定义的方式静态生成的，未能考虑当前数据记录区域的特征。另外，RST框架中提出了一个基于"HTML Token"单元的相似度计算方法。这种方法可以综合MDR中基于字符串编辑距离的方法之优点和DEPTA中基于树结构编辑距离的方法之优点。 / 很多解决数据记录抽取问题的已有方法(包括RST框架)都需要预定义若干硬性的条件，并且它们通过遍历DOM树结构来在一个网页中穷举搜索可能存在的数据记录区域。这些方法不能很好地处理大量的含有复杂数据记录结构的网页。因此，本文提出了第二个解决框架Skoga。Skoga框架由一个DOM结构知识驱动的模型和一个记录切分树模型组成。Skoga框架可以对DOM结构进行全局的分析，进而实现更加有效的、鲁棒的记录识别。DOM结构知识包含DOM背景知识和DOM统计知识。前者描述DOM结构中的一些逻辑关系，这些关系对DOM的逻辑结构进行限制。而后者描述一个DOM节点或者一组DOM节点的特点，由一组经过巧妙设计的特征(Feature)来表示。特征的权重是由参数估计算法在一个开发数据集上学习得到的。基于面向结构化输出的支持向量机(Structured-output Support Vector Machine)模型，本参数估计算法可以很好地处理DOM节点之间的依赖关系。另外，本文提出了一个基于分治策略的优化方法来搜索一个网页的最优化记录识别。 / 最后，本文提出了一个利用半结构化数据记录来进行维基百科类目(Wikipedia Category)扩充的框架。该框架首先从某个维基百科类目中获取几个已有的实体(Entity)作为种子，然后利用这些种子及其信息框(Infobox)中的属性来从Web上发掘更多的同一类目的实体及其属性信息。该框架的一个特点是它利用半结构化的数据记录来进行新实体和属性的抽取，而这些半结构化的数据记录是通过自动的方法从Web上获取的。该框架提出了一个基于条件随机场(Conditional Random Fields)的半监督学习模型来利用有限的标注样本进行目标信息抽取。这个半监督学习模型定义了一个记录相似关系图来指导学习过程，从而利用大量非标注样本来获得更好的信息抽取效果。 / The World Wide Web has been extensively developed since its first appearance two decades ago. Various applications on the Web have unprecedentedly changed people's lives. Although the explosive growth and spread of the Web have resulted in a huge information repository, it is still under-utilized due to the difficulty in automated information extraction (IE) caused by the heterogeneity of Web content. Thus, Web IE is an essential task in the utilization of Web information. Typically, a Web page may describe either a single object or a group of similar objects. For example, the description page of a digital camera describes different aspects of the camera. In contrast, the faculty list page of a department presents the information of a group of professors. Corresponding to these two types, Web IE methods can be broadly categorized into two classes, namely description-details-oriented extraction and object-records-oriented extraction. In this thesis, we focus on the latter task, namely semi-structured data record extraction from a single Web page. / In this thesis, we develop two frameworks to tackle the task of data record extraction. We first present a record segmentation search tree framework in which a new search structure, named Record Segmentation Tree (RST), is designed and several efficient search pruning strategies on the RST structure are proposed to identify the records in a given Web page. The subtree groups corresponding to possible data records are dynamically generated in the RST structure during the search process. Therefore, this framework is more flexible than existing methods such as MDR and DEPTA, which generate subtree groups in a static manner. Furthermore, instead of using string edit distance or tree edit distance, we propose a token-based edit distance which takes each DOM node as a basic unit in the cost calculation. / Many existing methods for data record extraction from Web pages, including the RST framework, contain pre-coded hard criteria and adopt an exhaustive search strategy for traversing the DOM tree.
They fail to handle many challenging pages containing complicated data records and record regions. In this thesis, we also present another framework, Skoga, which can perform robust detection of different kinds of data records and record regions. Skoga, composed of a detection model driven by DOM structure knowledge and a record segmentation search tree model, can conduct a global analysis of the DOM structure to achieve effective detection. The DOM structure knowledge consists of background knowledge as well as statistical knowledge capturing different characteristics of data records and record regions as exhibited in the DOM structure. Specifically, the background knowledge encodes some logical relations governing certain structural constraints in the DOM structure. The statistical knowledge is represented by some carefully designed features that capture different characteristics of a single node or a node group in the DOM. The feature weights are determined on a development data set via a parameter estimation algorithm based on a structured-output Support Vector Machine model, which can tackle the inter-dependency among the labels on the nodes of the DOM structure. An optimization method based on the divide-and-conquer principle is developed, making use of the DOM structure knowledge to quantitatively infer the best record and region recognition. / Finally, we present a framework that can make use of the detected data records to automatically populate existing Wikipedia categories. This framework takes a few existing entities that are automatically collected from a particular Wikipedia category as seed input and explores their attribute infoboxes to obtain clues for the discovery of more entities for this category and the attribute content of the newly discovered entities. One characteristic of this framework is that it conducts discovery and extraction from desirable semi-structured data record sets which are automatically collected from the Web. A semi-supervised learning model with Conditional Random Fields is developed to deal with the issues of extraction learning and the limited number of labeled examples derived from the seed entities. We make use of a proximate record graph, which captures alignment similarity among data records, to guide the semi-supervised learning process. The semi-supervised learning process can then leverage the benefit of the unlabeled data in the record set by controlling the label regularization under the guidance of the proximate record graph. / Bing, Lidong. / Thesis (Ph.D.)--Chinese University of Hong Kong, 2012. / Includes bibliographical references (leaves 114-123). / Abstract also in Chinese.
Table of contents:
Chapter 1 --- Introduction --- p.1
Chapter 1.1 --- Web Era and Web IE --- p.1
Chapter 1.2 --- Semi-structured Record and Region Detection --- p.3
Chapter 1.2.1 --- Problem Setting --- p.3
Chapter 1.2.2 --- Observations and Challenges --- p.5
Chapter 1.2.3 --- Our Proposed First Framework - Record Segmentation Tree --- p.9
Chapter 1.2.4 --- Our Proposed Second Framework - DOM Structure Knowledge Oriented Global Analysis --- p.10
Chapter 1.3 --- Entity Expansion and Attribute Acquisition with Semi-structured Data Records --- p.13
Chapter 1.3.1 --- Problem Setting --- p.13
Chapter 1.3.2 --- Our Proposed Framework - Semi-supervised CRF Regularized by Proximate Graph --- p.15
Chapter 1.4 --- Outline of the Thesis --- p.17
Chapter 2 --- Literature Survey --- p.19
Chapter 2.1 --- Semi-structured Record Extraction --- p.19
Chapter 2.2 --- Entity Expansion and Attribute Acquisition --- p.23
Chapter 3 --- Record Segmentation Tree (RST) Framework --- p.27
Chapter 3.1 --- Overview --- p.27
Chapter 3.2 --- Record Segmentation Tree --- p.29
Chapter 3.2.1 --- Basic Record Segmentation Tree --- p.29
Chapter 3.2.2 --- Slimmed Segmentation Tree --- p.30
Chapter 3.2.3 --- Utilize RST in Record Extraction --- p.31
Chapter 3.3 --- Search Pruning Strategies --- p.33
Chapter 3.3.1 --- Threshold-Based Top k Search --- p.33
Chapter 3.3.2 --- Complexity Analysis --- p.35
Chapter 3.3.3 --- Composite Node Pruning --- p.37
Chapter 3.3.4 --- More Challenging Record Region Discussion --- p.37
Chapter 3.4 --- Similarity Measure --- p.41
Chapter 3.4.1 --- Encoding Subtree with Tokens --- p.42
Chapter 3.4.2 --- Tandem Repeat Detection and Distance-based Measure --- p.42
Chapter 4 --- DOM Structure Knowledge Oriented Global Analysis (Skoga) Framework --- p.45
Chapter 4.1 --- Overview --- p.45
Chapter 4.2 --- Design of DOM Structure Knowledge --- p.49
Chapter 4.2.1 --- Background Knowledge --- p.49
Chapter 4.2.2 --- Statistical Knowledge --- p.51
Chapter 4.3 --- Finding Optimal Label Assignment --- p.54
Chapter 4.3.1 --- Inference for Bottom Subtrees --- p.55
Chapter 4.3.2 --- Recursive Inference for Higher Subtree --- p.57
Chapter 4.3.3 --- Backtracking for the Optimal Label Assignment --- p.59
Chapter 4.3.4 --- Second Optimal Label Assignment --- p.60
Chapter 4.4 --- Statistical Knowledge Acquisition --- p.62
Chapter 4.4.1 --- Finding Feature Weights via Structured Output SVM Learning --- p.62
Chapter 4.4.2 --- Region-oriented Loss --- p.63
Chapter 4.4.3 --- Cost Function Optimization --- p.65
Chapter 4.5 --- Record Segmentation and Reassembling --- p.66
Chapter 5 --- Experimental Results of Data Record Extraction --- p.68
Chapter 5.1 --- Evaluation Data Set --- p.68
Chapter 5.2 --- Experimental Setup --- p.70
Chapter 5.3 --- Experimental Results on TBDW --- p.73
Chapter 5.4 --- Experimental Results on Hybrid Data Set with Nested Region --- p.76
Chapter 5.5 --- Experimental Results on Hybrid Data Set with Intertwined Region --- p.78
Chapter 5.6 --- Empirical Case Studies --- p.79
Chapter 5.6.1 --- Case Study One --- p.80
Chapter 5.6.2 --- Case Study Two --- p.83
Chapter 6 --- Semi-supervised CRF Regularized by Proximate Graph --- p.85
Chapter 6.1 --- Overview --- p.85
Chapter 6.2 --- Semi-structured Data Record Set Collection --- p.88
Chapter 6.3 --- Semi-supervised Learning Model for Extraction --- p.89
Chapter 6.3.1 --- Proximate Record Graph Construction --- p.91
Chapter 6.3.2 --- Semi-Markov CRF and Features --- p.94
Chapter 6.3.3 --- Posterior Regularization --- p.95
Chapter 6.3.4 --- Inference with Regularized Posterior --- p.97
Chapter 6.3.5 --- Semi-supervised Training --- p.97
Chapter 6.3.6 --- Result Ranking --- p.98
Chapter 6.4 --- Derived Training Example Generation --- p.99
Chapter 6.5 --- Experiments --- p.100
Chapter 6.5.1 --- Experiment Setting --- p.100
Chapter 6.5.2 --- Entity Expansion --- p.103
Chapter 6.5.3 --- Attribute Extraction --- p.107
Chapter 7 --- Conclusions and Future Work --- p.110
Chapter 7.1 --- Conclusions --- p.110
Chapter 7.2 --- Future Work --- p.112
Bibliography --- p.113
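The token-based edit distance proposed in this thesis (described in the abstract above, where each DOM node rather than each character is the unit of comparison) can be illustrated with a minimal sketch. The tag-name encoding and unit costs below are assumptions for illustration, not the thesis's exact formulation.

```python
# A minimal sketch: encode each DOM node as one token (its tag name) and run a
# standard dynamic-programming edit distance over the token sequences.
from html.parser import HTMLParser

class TagTokenizer(HTMLParser):
    """Flatten an HTML fragment into a sequence of start-tag tokens."""
    def __init__(self):
        super().__init__()
        self.tokens = []
    def handle_starttag(self, tag, attrs):
        self.tokens.append(tag)

def tokenize(fragment: str) -> list[str]:
    parser = TagTokenizer()
    parser.feed(fragment)
    return parser.tokens

def token_edit_distance(a: list[str], b: list[str]) -> int:
    """Levenshtein distance with each DOM-node token as the basic unit."""
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dp[i][0] = i
    for j in range(n + 1):
        dp[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,        # delete a node
                           dp[i][j - 1] + 1,        # insert a node
                           dp[i - 1][j - 1] + cost) # substitute a node
    return dp[m][n]

# Two candidate record subtrees with similar structure.
r1 = "<tr><td><a>title</a></td><td>price</td></tr>"
r2 = "<tr><td><a>title</a></td><td><b>price</b></td></tr>"
print(token_edit_distance(tokenize(r1), tokenize(r2)))  # small distance: similar records
```

Working at the node level keeps the comparison insensitive to the length of text inside cells, which is what lets structurally similar records score as near-duplicates.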
90

An integrated trading environment: to improve transparency and efficiency of financial information transmission.

January 2002 (has links)
Currently, many systems improve transparency and efficiency in the transmission and utilization of financial information. For example, the Electronic Data Gathering, Analysis, and Retrieval system (EDGAR), used by the Securities and Exchange Commission (SEC) in the USA, is an electronic filing system through which listed companies submit financial documents; after validation, those documents are disseminated electronically to investors. Securities Markets Automated Research Training and Surveillance (SMARTS), developed in Australia, is a market monitoring system which detects market behavior, trading patterns, and aberrant trading activities. Many financial information retrieval and analysis systems also support searching, retrieval, and analysis of financial information; one such example is BOOM, which collects, organizes, analyzes, and delivers financial information to its subscribers. However, these systems are designed to support a vertical perspective of market information transmission, which can be duplicated and error-prone, and they could be combined to provide a more complete view of information for market monitoring and investment decisions. / This dissertation aims to integrate financial information from a horizontal perspective: an Integrated Trading Environment is proposed, comprising five systems that provide the needed functionality. As part of this work, two surveys were conducted on the adoption of online trading by both investors and brokers in order to develop strategies for launching the online trading system. To support these strategies, a knowledge-based financial information infrastructure was proposed for the new trading environment. With the support of the proposed Financial Information to Knowledge Transforming Model (FIKTM), an XML-based data integration architecture was constructed to improve market transparency and efficiency. / Lau Sau Mui. / "August 2002." / Source: Dissertation Abstracts International, Volume: 63-10, Section: B, page: 4873. / Supervisor: Jerome Yen. / Thesis (Ph.D.)--Chinese University of Hong Kong, 2002. / Includes bibliographical references (p. 210-219). / Electronic reproduction. Hong Kong : Chinese University of Hong Kong, [2012] System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Electronic reproduction. Ann Arbor, MI : ProQuest dissertations and theses, [200-] System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Abstracts in English and Chinese. / School code: 1307.
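As a sketch of the kind of XML-based integration layer described above, the snippet below normalizes filings from different upstream sources into a single XML feed. Every element name and value here is invented for illustration; this is not FIKTM's actual schema.

```python
# Hypothetical normalization of source filings into one common XML record type.
import xml.etree.ElementTree as ET

def to_common_record(source: str, ticker: str, doc_type: str, period: str) -> ET.Element:
    record = ET.Element("FinancialDocument")
    ET.SubElement(record, "Source").text = source
    ET.SubElement(record, "Ticker").text = ticker
    ET.SubElement(record, "DocType").text = doc_type
    ET.SubElement(record, "Period").text = period
    return record

# Integrate two filings from different upstream systems into one feed.
feed = ET.Element("IntegratedFeed")
feed.append(to_common_record("RegulatorFiling", "0005.HK", "AnnualReport", "2001"))
feed.append(to_common_record("ExchangeFeed", "0001.HK", "InterimReport", "2002H1"))
print(ET.tostring(feed, encoding="unicode"))
```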
