About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
421

Exploratory Visualization of Data with Variable Quality

Huang, Shiping 11 January 2005 (has links)
Data quality, which refers to the correctness, uncertainty, completeness, and other aspects of data, has become an increasingly prominent concern and has been addressed across multiple disciplines. Data quality issues can be introduced at any stage of data manipulation, such as data collection, transformation, and visualization. Data visualization is a process of data mining and analysis using graphical presentation and interpretation. The correctness and completeness of discoveries made through visualization depend to a large extent on the quality of the original data. Without the integration of quality information with data presentation, the analysis of data using visualization is incomplete at best and can lead to inaccurate or incorrect conclusions at worst. This thesis addresses the issue of data quality visualization. Incorporating data quality measures into data displays is challenging because the display is apt to become cluttered when faced with many dimensions and data records. We investigate the incorporation of data quality information into traditional multivariate display techniques and develop novel visualization and interaction tools that operate in data quality space. We validate our results using several data sets that have variable quality associated with dimensions, records, and data values.
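The core idea above, carrying a record's quality alongside its values into the display, can be sketched in a few lines. The mapping below (quality score to opacity, with a visibility floor so poor records fade rather than vanish) is one illustrative convention, not the thesis's actual technique, and the sample records are invented.

```python
def quality_to_alpha(quality, floor=0.15):
    """Map a quality score in [0, 1] to an opacity in [floor, 1.0].

    A nonzero floor keeps even the worst records faintly visible;
    hiding them entirely would itself mislead the analyst.
    """
    q = max(0.0, min(1.0, quality))
    return floor + (1.0 - floor) * q

# Hypothetical records, each tagged with a per-record quality score.
records = [
    {"x": 1.0, "y": 2.0, "quality": 0.95},  # well-measured point
    {"x": 1.5, "y": 1.0, "quality": 0.40},  # partially imputed point
    {"x": 2.0, "y": 3.5, "quality": 0.05},  # mostly missing point
]

alphas = [quality_to_alpha(r["quality"]) for r in records]
# With a plotting library such as matplotlib one could then render, e.g.:
#   plt.scatter(xs, ys, c=[(0.0, 0.0, 1.0, a) for a in alphas])
```

The same quality score could just as well drive point size or hue; opacity is used here only because it degrades gracefully as displays become dense.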
422

The implementation of a subset data dictionary verifier

Cline, Jacquelyn Fern January 2010 (has links)
Typescript (photocopy). / Digitized by Kansas Correctional Industries
423

System development and its effect on management in planning for the use of electronic data processing equipment

Hokansson, Nils C.I. January 1962 (has links)
Thesis (M.B.A.)--Boston University
424

Development of an optical system for dynamic evaluation of phase recovery algorithms

Palani, Ananta January 2015 (has links)
No description available.
425

Automated identification of digital evidence across heterogeneous data resources

Mohammed, Hussam J. January 2018 (has links)
Digital forensics has become an increasingly important tool in the fight against cyber and computer-assisted crime. However, with an increasing range of technologies at people's disposal, investigators find themselves having to process and analyse many systems with large volumes of data (e.g., PCs, laptops, tablets, and smartphones) within a single case. Unfortunately, current digital forensic tools operate in an isolated manner, investigating systems and applications individually. The heterogeneity and volume of evidence place time constraints and a significant burden on investigators. Examples of heterogeneity include applications such as messaging (e.g., iMessenger, Viber, Snapchat, and WhatsApp), web browsers (e.g., Firefox and Google Chrome), and file systems (e.g., NTFS, FAT, and HFS). Being able to analyse and investigate evidence from across devices and applications in a universal and harmonised fashion would enable investigators to query all data at once. In addition, successfully prioritizing evidence and reducing the volume of data to be analysed reduces the time taken and the cognitive load on the investigator. This thesis focuses on the examination and analysis phases of the digital investigation process. It explores the feasibility of dealing with big and heterogeneous data sources in order to correlate evidence across these evidential sources in an automated way. To this end, a novel approach was developed to address the heterogeneity of big data using three algorithms: harmonisation, clustering, and automated identification of evidence (AIE). The harmonisation algorithm provides an automated framework for merging similar datasets by characterising similar metadata categories and combining them into a single dataset.
This algorithm overcomes heterogeneity issues and simplifies examination and analysis by allowing evidential artefacts across devices and applications to be queried at once through the shared categories. Based on the merged datasets, the clustering algorithm is used to identify the evidential files and isolate the non-related files based on their metadata. Afterwards, the AIE algorithm attempts to identify the cluster holding the largest number of evidential artefacts by searching with two kinds of input: criminal-profiling activities and information about the criminals themselves. The related clusters are then identified through timeline analysis and a search for artefacts associated with the files in the first cluster. A series of experiments using real-life forensic datasets was conducted to evaluate the algorithms across five categories of datasets (i.e., messaging, graphical files, file system, internet history, and emails), each containing data from different applications across different devices. The results of the characterisation and harmonisation process show that the algorithm can merge all fields successfully, with the exception of some binary-based data found within the messaging datasets (contained within Viber and SMS). The error occurred because the characterisation process lacked the information needed to make a useful determination. However, on further analysis, it was found that the error had minimal impact on the subsequent merged data. The results of the clustering process and the AIE algorithm showed that, working together, the two algorithms can identify more than 92% of evidential files.
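The harmonisation step described above can be illustrated with a small sketch: map application-specific field names onto shared categories so that records from different sources become queryable as one dataset. The canonical categories, field mappings, and sample Viber/Chrome records below are hypothetical stand-ins, not the thesis's actual schema.

```python
# Hypothetical mapping from source-specific metadata fields to shared
# (harmonised) categories. A real system would derive such mappings by
# characterising the fields, as the abstract describes.
CANONICAL = {
    "msg_time": "timestamp", "visit_date": "timestamp", "mtime": "timestamp",
    "sender": "actor", "profile": "actor", "owner": "actor",
    "body": "content", "url": "content", "path": "content",
}

def harmonise(record, source):
    """Rewrite a record's fields into canonical categories, tagging its source."""
    out = {"source": source}
    for field, value in record.items():
        out[CANONICAL.get(field, field)] = value
    return out

viber = {"msg_time": "2017-03-01T10:00", "sender": "alice", "body": "hi"}
chrome = {"visit_date": "2017-03-01T10:05", "profile": "alice", "url": "example.com"}

merged = [harmonise(viber, "Viber"), harmonise(chrome, "Chrome")]
# Both records now expose the same "timestamp"/"actor"/"content" keys, so a
# single query (e.g. all activity by "alice") spans both sources at once.
actor_hits = [r for r in merged if r["actor"] == "alice"]
```

The clustering and AIE stages would then operate over these harmonised keys (grouping by metadata, then searching the clusters), which is what makes the single-query examination possible.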
426

Fast fingerprint verification using sub-regions of fingerprint images.

January 2004 (has links)
Chan Ka Cheong.
Thesis (M.Phil.)--Chinese University of Hong Kong, 2004.
Includes bibliographical references (leaves 77-85).
Abstracts in English and Chinese.

Contents:
Chapter 1 --- Introduction --- p.1
  1.1 --- Introduction to Fingerprint Verification --- p.1
    1.1.1 --- Biometrics --- p.1
    1.1.2 --- Fingerprint History --- p.2
    1.1.3 --- Fingerprint characteristics --- p.4
    1.1.4 --- A Generic Fingerprint Matching System Architecture --- p.6
    1.1.5 --- Fingerprint Verification and Identification --- p.8
    1.1.7 --- Biometric metrics --- p.10
  1.2 --- Embedded system --- p.12
    1.2.1 --- Introduction to embedded systems --- p.12
    1.2.2 --- Embedded systems characteristics --- p.12
    1.2.3 --- Performance evaluation of a StrongARM processor --- p.13
  1.3 --- Objective: an embedded fingerprint verification system --- p.16
  1.4 --- Organization of the Thesis --- p.17
Chapter 2 --- Literature Reviews --- p.18
  2.1 --- Fingerprint matching overviews --- p.18
    2.1.1 --- Minutiae-based fingerprint matching --- p.20
  2.2 --- Fingerprint image enhancement --- p.21
  2.3 --- Orientation field Computation --- p.22
  2.4 --- Fingerprint Segmentation --- p.24
  2.5 --- Singularity Detection --- p.25
  2.6 --- Fingerprint Classification --- p.27
  2.7 --- Minutia extraction --- p.30
    2.7.1 --- Binarization and thinning --- p.30
    2.7.2 --- Direct gray scale approach --- p.32
    2.7.3 --- Comparison of the minutiae extraction approaches --- p.35
  2.8 --- Minutiae matching --- p.37
    2.8.1 --- Point matching --- p.37
    2.8.2 --- Structural matching technique --- p.38
  2.9 --- Summary --- p.40
Chapter 3 --- Implementation --- p.41
  3.1 --- Fast Fingerprint Matching System Overview --- p.41
    3.1.1 --- Typical Fingerprint Matching System --- p.41
    3.1.2 --- Fast Fingerprint Matching System Overview --- p.41
  3.2 --- Orientation computation --- p.43
    3.2.1 --- Orientation computation --- p.43
    3.2.2 --- Smooth orientation field --- p.43
  3.3 --- Fingerprint image segmentation --- p.45
  3.4 --- Reference Point Extraction --- p.46
  3.5 --- A Classification Scheme --- p.51
  3.6 --- Finding A Small Fingerprint Matching Area --- p.54
  3.7 --- Fingerprint Matching --- p.57
  3.8 --- Minutiae extraction --- p.59
    3.8.1 --- Ridge tracing --- p.59
    3.8.2 --- Cross sectioning --- p.60
    3.8.3 --- Local maximum determination --- p.61
    3.8.4 --- Ridge tracing marking --- p.62
    3.8.5 --- Ridge tracing stop criteria --- p.63
  3.9 --- Optimization technique --- p.65
  3.10 --- Summary --- p.66
Chapter 4 --- Experimental results --- p.67
  4.1 --- Experimental setup --- p.67
  4.2 --- Fingerprint database --- p.67
  4.3 --- Reference point accuracy --- p.67
  4.4 --- Variable number of matching minutiae results --- p.68
  4.5 --- Contribution of the verification prototype --- p.72
Chapter 5 --- Conclusion and Future Research --- p.74
  5.1 --- Conclusion --- p.74
  5.2 --- Future Research --- p.74
Bibliography --- p.77
427

A new approach to clustering large databases in data mining.

January 2004 (has links)
Lau Hei Yuet.
Thesis (M.Phil.)--Chinese University of Hong Kong, 2004.
Includes bibliographical references (leaves 74-76).
Abstracts in English and Chinese.

Contents:
Abstract --- p.i
Chapter 1 --- Introduction --- p.1
  1.1 --- Cluster Analysis --- p.1
  1.2 --- Dissimilarity Measures --- p.3
    1.2.1 --- Continuous Data --- p.4
    1.2.2 --- Categorical and Nominal Data --- p.4
    1.2.3 --- Mixed Data --- p.5
    1.2.4 --- Missing Data --- p.6
  1.3 --- Outline of the thesis --- p.6
Chapter 2 --- Clustering Algorithms --- p.9
  2.1 --- The k-means Algorithm Family --- p.9
    2.1.1 --- The Algorithms --- p.9
    2.1.2 --- Choosing the Number of Clusters - the MaxMin Algorithm --- p.12
    2.1.3 --- Starting Configuration - the MaxMin Algorithm --- p.16
  2.2 --- Clustering Using Unidimensional Scaling --- p.16
    2.2.1 --- Unidimensional Scaling --- p.16
    2.2.2 --- Procedures --- p.17
    2.2.3 --- Guttman's Updating Algorithm --- p.18
    2.2.4 --- Pliner's Smoothing Algorithm --- p.18
    2.2.5 --- Starting Configuration --- p.19
    2.2.6 --- Choosing the Number of Clusters --- p.21
  2.3 --- Cluster Validation --- p.23
    2.3.1 --- Continuous Data --- p.23
    2.3.2 --- Nominal Data --- p.24
    2.3.3 --- Resampling Method --- p.25
  2.4 --- Conclusion --- p.27
Chapter 3 --- Experimental Results --- p.29
  3.1 --- Simulated Data 1 --- p.29
  3.2 --- Simulated Data 2 --- p.35
  3.3 --- Iris Data --- p.41
  3.4 --- Wine Data --- p.47
  3.5 --- Mushroom Data --- p.53
  3.6 --- Conclusion --- p.59
Chapter 4 --- Large Database --- p.61
  4.1 --- Sliding Windows Algorithm --- p.61
  4.2 --- Two-stage Algorithm --- p.63
  4.3 --- Three-stage Algorithm --- p.65
  4.4 --- Experimental Results --- p.66
  4.5 --- Conclusion --- p.68
Appendix A --- Algorithms --- p.69
  A.1 --- MaxMin Algorithm --- p.69
  A.2 --- Sliding Windows Algorithm --- p.70
  A.3 --- Two-stage Algorithm - Stage One --- p.72
  A.4 --- Two-stage Algorithm - Stage Two --- p.73
Bibliography --- p.74
428

Induction of classification rules and decision trees using genetic algorithms.

January 2005 (has links)
Ng Sai-Cheong.
Thesis submitted in: December 2004.
Thesis (M.Phil.)--Chinese University of Hong Kong, 2005.
Includes bibliographical references (leaves 172-178).
Abstracts in English and Chinese.

Contents:
Abstract --- p.i
Acknowledgement --- p.iii
Chapter 1 --- Introduction --- p.1
  1.1 --- Data Mining --- p.1
  1.2 --- Problem Specifications and Motivations --- p.3
  1.3 --- Contributions of the Thesis --- p.5
  1.4 --- Thesis Roadmap --- p.6
Chapter 2 --- Related Work --- p.9
  2.1 --- Supervised Classification Techniques --- p.9
    2.1.1 --- Classification Rules --- p.9
    2.1.2 --- Decision Trees --- p.11
  2.2 --- Evolutionary Algorithms --- p.19
    2.2.1 --- Genetic Algorithms --- p.19
    2.2.2 --- Genetic Programming --- p.24
    2.2.3 --- Evolution Strategies --- p.26
    2.2.4 --- Evolutionary Programming --- p.32
  2.3 --- Applications of Evolutionary Algorithms to Induction of Classification Rules --- p.33
    2.3.1 --- SCION --- p.33
    2.3.2 --- GABIL --- p.34
    2.3.3 --- LOGENPRO --- p.35
  2.4 --- Applications of Evolutionary Algorithms to Construction of Decision Trees --- p.35
    2.4.1 --- Binary Tree Genetic Algorithm --- p.35
    2.4.2 --- OC1-GA --- p.36
    2.4.3 --- OC1-ES --- p.38
    2.4.4 --- GATree --- p.38
    2.4.5 --- Induction of Linear Decision Trees using Strong Typing GP --- p.39
  2.5 --- Spatial Data Structures and their Applications --- p.40
    2.5.1 --- Spatial Data Structures --- p.40
    2.5.2 --- Applications of Spatial Data Structures --- p.42
Chapter 3 --- Induction of Classification Rules using Genetic Algorithms --- p.45
  3.1 --- Introduction --- p.45
  3.2 --- Rule Learning using Genetic Algorithms --- p.46
    3.2.1 --- Population Initialization --- p.47
    3.2.2 --- Fitness Evaluation of Chromosomes --- p.49
    3.2.3 --- Token Competition --- p.50
    3.2.4 --- Chromosome Elimination --- p.51
    3.2.5 --- Rule Migration --- p.52
    3.2.6 --- Crossover --- p.53
    3.2.7 --- Mutation --- p.55
    3.2.8 --- Calculating the Number of Correctly Classified Training Samples in a Rule Set --- p.56
  3.3 --- Performance Evaluation --- p.56
    3.3.1 --- Performance Comparison of the GA-based CPRLS and Various Supervised Classification Algorithms --- p.57
    3.3.2 --- Performance Comparison of the GA-based CPRLS and RS-based CPRLS --- p.68
    3.3.3 --- Effects of Token Competition --- p.69
    3.3.4 --- Effects of Rule Migration --- p.70
  3.4 --- Chapter Summary --- p.73
Chapter 4 --- Genetic Algorithm-based Quadratic Decision Trees --- p.74
  4.1 --- Introduction --- p.74
  4.2 --- Construction of Quadratic Decision Trees --- p.76
  4.3 --- Evolving the Optimal Quadratic Hypersurface using Genetic Algorithms --- p.77
    4.3.1 --- Population Initialization --- p.80
    4.3.2 --- Fitness Evaluation --- p.81
    4.3.3 --- Selection --- p.81
    4.3.4 --- Crossover --- p.82
    4.3.5 --- Mutation --- p.83
  4.4 --- Performance Evaluation --- p.84
    4.4.1 --- Performance Comparison of the GA-based QDT and Various Supervised Classification Algorithms --- p.85
    4.4.2 --- Performance Comparison of the GA-based QDT and RS-based QDT --- p.92
    4.4.3 --- Effects of Changing Parameters of the GA-based QDT --- p.93
  4.5 --- Chapter Summary --- p.109
Chapter 5 --- Induction of Linear and Quadratic Decision Trees using Spatial Data Structures --- p.111
  5.1 --- Introduction --- p.111
  5.2 --- Construction of k-D Trees --- p.113
  5.3 --- Construction of Generalized Quadtrees --- p.119
  5.4 --- Induction of Oblique Decision Trees using Spatial Data Structures --- p.124
  5.5 --- Induction of Quadratic Decision Trees using Spatial Data Structures --- p.130
  5.6 --- Performance Evaluation --- p.139
    5.6.1 --- Performance Comparison with Various Supervised Classification Algorithms --- p.142
    5.6.2 --- Effects of Changing the Minimum Number of Training Samples at Each Node of a k-D Tree --- p.155
    5.6.3 --- Effects of Changing the Minimum Number of Training Samples at Each Node of a Generalized Quadtree --- p.157
    5.6.4 --- Effects of Changing the Size of Datasets --- p.158
  5.7 --- Chapter Summary --- p.160
Chapter 6 --- Conclusions --- p.164
  6.1 --- Contributions --- p.164
  6.2 --- Future Work --- p.167
Appendix A --- Implementation of Data Mining Algorithms Specified in the Thesis --- p.170
Bibliography --- p.178
429

Design and control of a controllable hybrid mechanical metal forming press. / CUHK electronic theses & dissertations collection

January 2008 (has links)
A real-time dynamic feedback control system is developed. An improved PID algorithm, called the integral separated piecewise PID scheme, is used in the control system. This algorithm is able to limit the contribution of the integral component in the PID calculation to avoid integral windup. In addition, it can use different PID parameters for different segments within one punch motion cycle. Hence, the error of the punch motion, whether resulting from the machine assembly or from the machine dynamics, can be compensated by tuning the velocity of the servomotor. This is a unique feature of the new press that ensures its accuracy. / Based on the novel structure, the detailed design is then carried out, including the mechanical design, kinematics and inverse kinematics analysis, static force analysis, parametric design, and other related designs. A calibration method based on experiment and computer simulation is proposed for the new press, which is also useful for parallel mechanisms in general. In cooperation with Guangdong Metal Forming Machine Works Co. Ltd., a 250 kN prototype has been built and tested. / To ensure the desired performance, dynamic control is necessary. The thesis uses two dynamic modeling methods to study the dynamics of the press. One is the kineto-static method, also called the D'Alembert principle, which rearranges Newton's second law and transforms a dynamic problem into an equivalent static one by adding the inertial forces and torques to the system. The model can then be analyzed easily and exactly as a static system subjected to the inertial forces and torques and the external forces. The other is the Lagrangian method, which derives the dynamic model from an energy perspective. Based on the model, the dynamics of the press are studied by computer simulation and validated experimentally.
/ In this thesis, a controllable hybrid mechanical metal forming press is developed, which is driven by a CSM with a flywheel and a servomotor. From a mechanism point of view, it is a closed-loop 2-DOF parallel planar five-bar mechanism with four revolute joints and one prismatic joint. Thanks to the use of the servomotor, the punch motion of the new press can be controlled by tuning the velocity of the servomotor. Accordingly, desired punch motions for different stamping processes can be obtained. In other words, the new press is flexible and controllable like the servo mechanical press and the hydraulic press. Moreover, the CSM with flywheel provides the main power during the stamping operation, and hence, it is energy efficient. In addition, it is inexpensive to build, as it uses only a small servomotor. / Metal forming is one of the oldest production processes and yet is still one of the most commonly used processes today. Every day, millions of parts are produced by metal forming, ranging from battery caps to automobile body panels. Therefore, even a small improvement can yield significant corporate gains. / The thesis also describes the trajectory planning method for the press, which is based on a combination of inverse kinematics and cubic spline interpolation. The trajectory is optimized under multiple constraints on the velocity, acceleration, and jerk of the servomotor. This guarantees that the new press is controllable and energy efficient. / Two typical stamping processes, drawing and forging, are taken as examples for the operations of the new press. The results of the simulation and the experiment match well. Based on the simulations and experiments, it is found that the CSM provides the main power for the metal forming operations, while the servomotor is mainly responsible for overcoming the inertial forces to realize the desired punch motion. The experiments show that the new press is energy efficient, fast, controllable, and inexpensive to build.
It combines the advantages of both the mechanical press and the hydraulic press and performs well. It is expected that the new press will have great potential for the metal forming industry. (Abstract shortened by UMI.) / He, Kai. / "February 2008." / Adviser: Ruxu Du. / Source: Dissertation Abstracts International, Volume: 70-03, Section: B, page: 1902. / Thesis (Ph.D.)--Chinese University of Hong Kong, 2008. / Includes bibliographical references (p. 147-149). / Electronic reproduction. Hong Kong : Chinese University of Hong Kong, [2012] System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Electronic reproduction. [Ann Arbor, MI] : ProQuest Information and Learning, [200-] System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Abstracts in English and Chinese. / School code: 1307.
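The integral separated piecewise PID scheme described in the abstract can be sketched generically: the integral term accumulates only once the error falls inside a threshold band (avoiding windup during large transients), and the gains switch per motion segment. This is an illustration of the general idea, not the author's implementation; the segment names, gains, and threshold below are invented for the example.

```python
class IntegralSeparatedPID:
    """Generic sketch of an integral-separated, piecewise-gain PID loop."""

    def __init__(self, gains_by_segment, sep_threshold, dt):
        self.gains = gains_by_segment  # segment name -> (kp, ki, kd)
        self.eps = sep_threshold       # |error| above this freezes the integral
        self.dt = dt
        self.integral = 0.0
        self.prev_err = 0.0

    def update(self, error, segment):
        kp, ki, kd = self.gains[segment]   # piecewise: gains per motion segment
        if abs(error) <= self.eps:         # integral separation: only integrate
            self.integral += error * self.dt  # once the error is small
        deriv = (error - self.prev_err) / self.dt
        self.prev_err = error
        return kp * error + ki * self.integral + kd * deriv

# Hypothetical gains for two segments of one punch motion cycle.
pid = IntegralSeparatedPID({"approach": (2.0, 0.5, 0.1),
                            "forming": (4.0, 1.0, 0.2)},
                           sep_threshold=0.5, dt=0.01)
u_far = pid.update(3.0, "approach")   # large error: integral stays frozen
u_near = pid.update(0.2, "forming")   # small error: integral now accumulates
```

In the press, the controller output would correspond to the servomotor velocity command that compensates punch-motion error within each segment of the cycle.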
430

Mining a shared concept space for domain adaptation in text mining. / CUHK electronic theses & dissertations collection

January 2011 (has links)
In many text mining applications involving a high-dimensional feature space, it is difficult to collect sufficient training data for different domains. One strategy to tackle this problem is to intelligently adapt a model trained on one domain with labeled data to another domain with only unlabeled data. This strategy is known as domain adaptation. However, existing domain adaptation approaches have two major limitations. The first is that they all split the domain adaptation framework into two separate steps: the first step attempts to minimize the domain gap, and the second step then trains the predictive model on the reweighted instances or transformed feature representation. However, such a transformed representation may discard information that affects predictive performance. The second limitation is that they are restricted to using first-order statistics in a Reproducing Kernel Hilbert Space (RKHS) to measure the distribution difference between the source domain and the target domain. In this thesis, we focus on developing solutions to these two limitations hindering the progress of domain adaptation techniques. / We then propose an improved symmetric Stein's loss (SSL) function which combines the mean and covariance discrepancy into a unified Bregman matrix divergence, of which the Jensen-Shannon divergence between normal distributions is a particular case. Based on this second-order distribution gap measure, we present another new domain adaptation method called Location and Scatter Matching. The goal is to find a good feature representation which can reduce the embedded distribution gap, measured by SSL, between the source domain and the target domain while ensuring that the derived representation encodes sufficient discriminative information with respect to the labels.
Then a standard machine learning algorithm, such as the Support Vector Machine (SVM), can be used to train classifiers in the new feature subspace across domains. / We conduct a series of experiments on real-world datasets to compare the performance of our proposed approaches with other competitive methods. The results show significant improvement over existing domain adaptation approaches. / We develop a novel model to learn a low-rank shared concept space with respect to two criteria simultaneously: the empirical loss in the source domain, and the embedded distribution gap between the source domain and the target domain. In addition, we can transfer the predictive power from the extracted common features to the characteristic features in the target domain via the feature graph Laplacian. Moreover, we can kernelize our proposed method in the Reproducing Kernel Hilbert Space (RKHS) so as to generalize the model using powerful kernel functions. We theoretically analyze the expected error, evaluated by common convex loss functions in the target domain under the empirical risk minimization framework, showing that the error bound can be controlled by the expected loss in the source domain and the embedded distribution gap. / Chen, Bo. / Adviser: Wai Lam. / Source: Dissertation Abstracts International, Volume: 73-04, Section: B, page: . / Thesis (Ph.D.)--Chinese University of Hong Kong, 2011. / Includes bibliographical references (leaves 87-95). / Electronic reproduction. Hong Kong : Chinese University of Hong Kong, [2012] System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Electronic reproduction. [Ann Arbor, MI] : ProQuest Information and Learning, [201-] System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Abstract also in Chinese.
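The abstract's key measurement idea is a distribution gap that reacts to both mean (location) and covariance (scatter) differences. The thesis's exact SSL formula is not reproduced here; below is a closely related second-order measure, the symmetrised KL (Jeffreys) divergence between two univariate Gaussians, which likewise vanishes only when both the means and the variances match.

```python
def jeffreys_gauss(m1, v1, m2, v2):
    """Symmetrised KL divergence between N(m1, v1) and N(m2, v2).

    Sensitive to both location (mean) and scatter (variance) differences,
    in the spirit of the second-order gap measures discussed above.
    """
    dm2 = (m1 - m2) ** 2
    return 0.5 * ((v1 + dm2) / v2 + (v2 + dm2) / v1 - 2.0)

# Identical distributions give zero gap; shifting the mean or rescaling
# the variance makes the gap grow.
same = jeffreys_gauss(0.0, 1.0, 0.0, 1.0)     # 0.0
shifted = jeffreys_gauss(0.0, 1.0, 1.0, 1.0)  # 1.0
scaled = jeffreys_gauss(0.0, 1.0, 0.0, 4.0)   # 0.5 * (0.25 + 4 - 2) = 1.125
```

A first-order measure (comparing means alone) would score `scaled` as zero gap; capturing the variance term is precisely what the second-order approach adds.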
