  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
691

IMPROVING THE REALISM OF SYNTHETIC IMAGES THROUGH THE MIXTURE OF ADVERSARIAL AND PERCEPTUAL LOSSES

Atapattu, Charith Nisanka 01 December 2018 (has links)
This research describes a novel method for generating synthetic images with improved realism while preserving annotation information and the eye gaze direction. Furthermore, it describes how a perceptual loss can be combined with basic features and techniques from adversarial networks to achieve better results.
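The mixture of losses described in this abstract can be sketched numerically. This is a minimal illustration, not the thesis's implementation: the feature maps and discriminator scores are assumed inputs (in practice they would come from a pretrained feature extractor and a trained discriminator), and the weighting `lam` is an illustrative hyperparameter.

```python
import numpy as np

def perceptual_loss(feat_real, feat_fake):
    # L2 distance between feature maps of a fixed feature extractor
    return float(np.mean((feat_real - feat_fake) ** 2))

def adversarial_loss(d_scores_fake):
    # non-saturating generator objective: -log D(G(z))
    eps = 1e-12
    return float(-np.mean(np.log(d_scores_fake + eps)))

def mixed_loss(feat_real, feat_fake, d_scores_fake, lam=0.5):
    # weighted mixture of the adversarial and perceptual terms
    return adversarial_loss(d_scores_fake) + lam * perceptual_loss(feat_real, feat_fake)
```

A generator trained against such a mixture is pushed toward realism by the adversarial term while the perceptual term anchors it to the source image content (here, the annotation and gaze information the abstract mentions).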
692

Image representation, processing and analysis by support vector regression. / 支援矢量回歸法之影像表示式及其影像處理與分析 / Image representation, processing and analysis by support vector regression. / Zhi yuan shi liang hui gui fa zhi ying xiang biao shi shi ji qi ying xiang chu li yu fen xi

January 2001 (has links)
Chow Kai Tik = 支援矢量回歸法之影像表示式及其影像處理與分析 / 周啓迪. / Thesis (M.Phil.)--Chinese University of Hong Kong, 2001. / Includes bibliographical references (leaves 380-383). / Text in English; abstracts in English and Chinese. / Chow Kai Tik = Zhi yuan shi liang hui gui fa zhi ying xiang biao shi shi ji qi ying xiang chu li yu fen xi / Zhou Qidi. / Abstract in English / Abstract in Chinese / Acknowledgement / Content / List of figures / Chapter Chapter 1 --- Introduction --- p.1-11 / Chapter 1.1 --- Introduction --- p.2 / Chapter 1.2 --- Road Map --- p.9 / Chapter Chapter 2 --- Review of Support Vector Machine --- p.12-124 / Chapter 2.1 --- Structural Risk Minimization (SRM) --- p.13 / Chapter 2.1.1 --- Introduction / Chapter 2.1.2 --- Structural Risk Minimization / Chapter 2.2 --- Review of Support Vector Machine --- p.21 / Chapter 2.2.1 --- Review of Support Vector Classification / Chapter 2.2.2 --- Review of Support Vector Regression / Chapter 2.2.3 --- Review of Support Vector Clustering / Chapter 2.2.4 --- Summary of Support Vector Machines / Chapter 2.3 --- Implementation of Support Vector Machines --- p.60 / Chapter 2.3.1 --- Kernel Adatron for Support Vector Classification (KA-SVC) / Chapter 2.3.2 --- Kernel Adatron for Support Vector Regression (KA-SVR) / Chapter 2.3.3 --- Sequential Minimal Optimization for Support Vector Classification (SMO-SVC) / Chapter 2.3.4 --- Sequential Minimal Optimization for Support Vector Regression (SMO-SVR) / Chapter 2.3.5 --- Lagrangian Support Vector Classification (LSVC) / Chapter 2.3.6 --- Lagrangian Support Vector Regression (LSVR) / Chapter 2.4 --- Applications of Support Vector Machines --- p.117 / Chapter 2.4.1 --- Applications of Support Vector Classification / Chapter 2.4.2 --- Applications of Support Vector Regression / Chapter Chapter 3 --- Image Representation by Support Vector Regression --- p.125-183 / Chapter 3.1 --- Introduction of SVR Representation --- p.116 / Chapter 3.1.1 --- Image Representation by SVR / Chapter 3.1.2 
--- Implicit Smoothing of SVR representation / Chapter 3.1.3 --- "Different Insensitivity, C value, Kernel and Kernel Parameters" / Chapter 3.2 --- Variation on Encoding Method [Training Process] --- p.154 / Chapter 3.2.1 --- Training SVR with Missing Data / Chapter 3.2.2 --- Training SVR with Image Blocks / Chapter 3.2.3 --- Training SVR with Other Variations / Chapter 3.3 --- Variation on Decoding Method [Testing or Reconstruction Process] --- p.171 / Chapter 3.3.1 --- Reconstruction with Different Portion of Support Vectors / Chapter 3.3.2 --- Reconstruction with Different Support Vector Locations and Lagrange Multiplier Values / Chapter 3.3.3 --- Reconstruction with Different Kernels / Chapter 3.4 --- Feature Extraction --- p.177 / Chapter 3.4.1 --- Features on Simple Shape / Chapter 3.4.2 --- Invariant of Support Vector Features / Chapter Chapter 4 --- Mathematical and Physical Properties of SVR Representation --- p.184-243 / Chapter 4.1 --- Introduction of RBF Kernel --- p.185 / Chapter 4.2 --- Mathematical Properties: Integral Properties --- p.187 / Chapter 4.2.1 --- Integration of an SVR Image / Chapter 4.2.2 --- Fourier Transform of SVR Image (Hankel Transform of Kernel) / Chapter 4.2.3 --- Cross Correlation between SVR Images / Chapter 4.2.4 --- Convolution of SVR Images / Chapter 4.3 --- Mathematical Properties: Differential Properties --- p.219 / Chapter 4.3.1 --- Review of Differential Geometry / Chapter 4.3.2 --- Gradient of SVR Image / Chapter 4.3.3 --- Laplacian of SVR Image / Chapter 4.4 --- Physical Properties --- p.228 / Chapter 4.4.1 --- Transformation between Reconstructed Image and Lagrange Multipliers / Chapter 4.4.2 --- Relation between Original Image and SVR Approximation / Chapter 4.5 --- Appendix --- p.234 / Chapter 4.5.1 --- Hankel Transform for Common Functions / Chapter 4.5.2 --- Hankel Transform for RBF / Chapter 4.5.3 --- Integration of Gaussian / Chapter 4.5.4 --- Chain Rules for Differential Geometry / Chapter 4.5.5 --- Derivation
of Gradient of RBF / Chapter 4.5.6 --- Derivation of Laplacian of RBF / Chapter Chapter 5 --- Image Processing in SVR Representation --- p.244-293 / Chapter 5.1 --- Introduction --- p.245 / Chapter 5.2 --- Geometric Transformation --- p.241 / Chapter 5.2.1 --- "Brightness, Contrast and Image Addition" / Chapter 5.2.2 --- Interpolation or Resampling / Chapter 5.2.3 --- Translation and Rotation / Chapter 5.2.4 --- Affine Transformation / Chapter 5.2.5 --- Transformation with Given Optical Flow / Chapter 5.2.6 --- A Brief Summary / Chapter 5.3 --- SVR Image Filtering --- p.261 / Chapter 5.3.1 --- Discrete Filtering in SVR Representation / Chapter 5.3.2 --- Continuous Filtering in SVR Representation / Chapter Chapter 6 --- Image Analysis in SVR Representation --- p.294-370 / Chapter 6.1 --- Contour Extraction --- p.295 / Chapter 6.1.1 --- Contour Tracing by Equi-potential Line [using Gradient] / Chapter 6.1.2 --- Contour Smoothing and Contour Feature Extraction / Chapter 6.2 --- Registration --- p.304 / Chapter 6.2.1 --- Registration using Cross Correlation / Chapter 6.2.2 --- Registration using Phase Correlation [Phase Shift in Fourier Transform] / Chapter 6.2.3 --- Analysis of the Two Methods for Registration in SVR Domain / Chapter 6.3 --- Segmentation --- p.347 / Chapter 6.3.1 --- Segmentation by Contour Tracing / Chapter 6.3.2 --- Segmentation by Thresholding on Smoothed or Sharpened SVR Image / Chapter 6.3.3 --- Segmentation by Thresholding on SVR Approximation / Chapter 6.4 --- Appendix --- p.368 / Chapter Chapter 7 --- Conclusion --- p.371-379 / Chapter 7.1 --- Conclusion and contribution --- p.372 / Chapter 7.2 --- Future work --- p.378 / Reference --- p.380-383
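The core idea of this thesis (Chapter 3) is representing an image as a kernel expansion f(x) = Σ α_i K(x, x_i) + b with an RBF kernel, then reconstructing intensities from the coefficients. As a rough sketch of that representation, the toy below fits a 1-D "scanline" of intensities; note it solves for dense coefficients by least squares as a stand-in for true ε-insensitive SVR training, which is an assumption for brevity, not the thesis's method.

```python
import numpy as np

def rbf_kernel(x, centers, gamma=0.5):
    # K[i, j] = exp(-gamma * (x_i - c_j)^2), the RBF kernel used throughout
    return np.exp(-gamma * (x[:, None] - centers[None, :]) ** 2)

# a toy 1-D "scanline" of pixel intensities
x = np.arange(16, dtype=float)
y = np.sin(x / 3.0)                 # stand-in intensity profile

# dense least-squares surrogate for SVR training over the kernel expansion
K = rbf_kernel(x, x)
alpha, *_ = np.linalg.lstsq(K, y, rcond=None)

# decode: reconstruct the scanline from the kernel expansion
y_hat = rbf_kernel(x, x) @ alpha
```

In the actual SVR setting only the support vectors carry nonzero α, which is what makes the representation sparse and enables the reconstruction-from-partial-support-vectors experiments of Chapter 3.3.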
693

A novel fuzzy first-order logic learning system.

January 2002 (has links)
Tse, Ming Fun. / Thesis submitted in: December 2001. / Thesis (M.Phil.)--Chinese University of Hong Kong, 2002. / Includes bibliographical references (leaves 142-146). / Abstracts in English and Chinese. / Chapter 1 --- Introduction --- p.1 / Chapter 1.1 --- Problem Definition --- p.2 / Chapter 1.2 --- Contributions --- p.3 / Chapter 1.3 --- Thesis Outline --- p.4 / Chapter 2 --- Literature Review --- p.6 / Chapter 2.1 --- Representing Inexact Knowledge --- p.7 / Chapter 2.1.1 --- Nature of Inexact Knowledge --- p.7 / Chapter 2.1.2 --- Probability Based Reasoning --- p.8 / Chapter 2.1.3 --- Certainty Factor Algebra --- p.11 / Chapter 2.1.4 --- Fuzzy Logic --- p.13 / Chapter 2.2 --- Machine Learning Paradigms --- p.13 / Chapter 2.2.1 --- Classifications --- p.14 / Chapter 2.2.2 --- Neural Networks and Gradient Descent --- p.15 / Chapter 2.3 --- Related Learning Systems --- p.21 / Chapter 2.3.1 --- Relational Concept Learning --- p.21 / Chapter 2.3.2 --- Learning of Fuzzy Concepts --- p.24 / Chapter 2.4 --- Fuzzy Logic --- p.26 / Chapter 2.4.1 --- Fuzzy Set --- p.27 / Chapter 2.4.2 --- Basic Notations in Fuzzy Logic --- p.29 / Chapter 2.4.3 --- Basic Operations on Fuzzy Sets --- p.29 / Chapter 2.4.4 --- "Fuzzy Relations, Projection and Cylindrical Extension" --- p.31 / Chapter 2.4.5 --- Fuzzy First Order Logic and Fuzzy Prolog --- p.34 / Chapter 3 --- Knowledge Representation and Learning Algorithm --- p.43 / Chapter 3.1 --- Knowledge Representation --- p.44 / Chapter 3.1.1 --- Fuzzy First-order Logic - A Powerful Language --- p.44 / Chapter 3.1.2 --- Literal Forms --- p.48 / Chapter 3.1.3 --- Continuous Variables --- p.50 / Chapter 3.2 --- System Architecture --- p.61 / Chapter 3.2.1 --- Data Reading --- p.61 / Chapter 3.2.2 --- Preprocessing and Postprocessing --- p.67 / Chapter 4 --- Global Evaluation of Literals --- p.71 / Chapter 4.1 --- Existing Closeness Measures between Fuzzy Sets --- p.72 / Chapter 4.2 --- The Error Function and the Normalized Error
Functions --- p.75 / Chapter 4.2.1 --- The Error Function --- p.75 / Chapter 4.2.2 --- The Normalized Error Functions --- p.76 / Chapter 4.3 --- The Nodal Characteristics and the Error Peaks --- p.79 / Chapter 4.3.1 --- The Nodal Characteristics --- p.79 / Chapter 4.3.2 --- The Zero Error Line and the Error Peaks --- p.80 / Chapter 4.4 --- Quantifying the Nodal Characteristics --- p.85 / Chapter 4.4.1 --- Information Theory --- p.86 / Chapter 4.4.2 --- Applying the Information Theory --- p.88 / Chapter 4.4.3 --- Upper and Lower Bounds of CE --- p.89 / Chapter 4.4.4 --- The Whole Heuristics of FF99 --- p.93 / Chapter 4.5 --- An Example --- p.94 / Chapter 5 --- Partial Evaluation of Literals --- p.99 / Chapter 5.1 --- Importance of Covering in Inductive Learning --- p.100 / Chapter 5.1.1 --- The Divide-and-conquer Method --- p.100 / Chapter 5.1.2 --- The Covering Method --- p.101 / Chapter 5.1.3 --- Effective Pruning in Both Methods --- p.102 / Chapter 5.2 --- Fuzzification of FOIL --- p.104 / Chapter 5.2.1 --- Analysis of FOIL --- p.104 / Chapter 5.2.2 --- Requirements on System Fuzzification --- p.107 / Chapter 5.2.3 --- Possible Ways in Fuzzifying FOIL --- p.109 / Chapter 5.3 --- The α Covering Method --- p.111 / Chapter 5.3.1 --- Construction of Partitions by α-cut --- p.112 / Chapter 5.3.2 --- Adaptive-α Covering --- p.112 / Chapter 5.4 --- The Probabilistic Covering Method --- p.114 / Chapter 6 --- Results and Discussions --- p.119 / Chapter 6.1 --- Experimental Results --- p.120 / Chapter 6.1.1 --- Iris Plant Database --- p.120 / Chapter 6.1.2 --- Kinship Relational Domain --- p.122 / Chapter 6.1.3 --- The Fuzzy Relation Domain --- p.129 / Chapter 6.1.4 --- Age Group Domain --- p.134 / Chapter 6.1.5 --- The NBA Domain --- p.135 / Chapter 6.2 --- Future Development Directions --- p.137 / Chapter 6.2.1 --- Speed Improvement --- p.137 / Chapter 6.2.2 --- Accuracy Improvement --- p.138 / Chapter 6.2.3 --- Others --- p.138 / Chapter 7 --- Conclusion --- p.140 /
Bibliography --- p.142 / Chapter A --- C4.5 to FOIL File Format Conversion --- p.147 / Chapter B --- FF99 example --- p.150
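The basic fuzzy-set machinery this thesis builds on (Chapters 2.4 and 5.3) can be illustrated in a few lines: membership grades in [0, 1], the standard max/min union and intersection, and the α-cut that turns a fuzzy set into a crisp one, which is the operation underlying the α covering method. A minimal sketch with made-up membership values:

```python
# membership grades of two discrete fuzzy sets over a 5-element universe
A = [0.1, 0.4, 0.8, 1.0, 0.3]
B = [0.5, 0.2, 0.9, 0.6, 0.0]

def f_union(a, b):
    # standard fuzzy union: pointwise max of membership grades
    return [max(x, y) for x, y in zip(a, b)]

def f_intersection(a, b):
    # standard fuzzy intersection: pointwise min
    return [min(x, y) for x, y in zip(a, b)]

def alpha_cut(a, alpha):
    # crisp set of element indices whose membership reaches the threshold
    return [i for i, m in enumerate(a) if m >= alpha]
```

For example, `alpha_cut(A, 0.5)` keeps only the elements with membership at least 0.5, which is how an α-cut partitions fuzzy training examples into crisp covering regions.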
694

Multi-Axes CNC Turn-Mill-Hob Machining Center and Its applications in biomedical engineering. / CUHK electronic theses & dissertations collection

January 2012 (has links)
With the ever-increasing demand for smaller parts of greater complexity and accuracy, traditional machine tools can no longer machine miniature components effectively. Typical examples are the dental implant (a biomedical device) and the pinion shaft used in mechanical watch movements. Because of their complex geometry and tight tolerances, only a few machine tools on the market are capable of machining these parts. Our multi-axes CNC Turn-Mill-Hob Machining Center is highly effective for machining precise and complex engineering parts. The machine has 8 axes and, beyond the 8 machining axes, an automatic bar feeder and an automatic collection mechanism, so that feeding, machining, and collection run as a complete automated line. Electronic gearing guarantees precision hobbing, and advanced control (deep cross-coupling) guarantees synchronized multi-axis motion for efficient, accurate, and easy-to-use machining. In addition, to guarantee machine accuracy, we developed software-based compensation of the geometric errors of multi-axis CNC machine tools. Experimental tests show a turning accuracy of 0.003 mm, a milling accuracy of 0.005 mm, and a hobbing error below 0.0075 mm. / Many biomedical parts are axially asymmetric. Although such parts can be machined by conventional CNC methods, the efficiency is very low and the cost is high. A new milling method based on our machining center can machine them efficiently and accurately. The method uses polar-coordinate interpolation, which is superior to machining in Cartesian coordinates, especially when a linear axis and a rotary axis must interpolate together to generate a curve. To make the polar interpolation module easy to use, we developed a set of special polar-coordinate G codes. The module is integrated into our multi-axes CNC Turn-Mill-Hob Machining Center. / Another important finding is the use of hobbing to machine axially symmetric and asymmetric parts. In the hundred years since it was invented, hobbing has been the most efficient way to cut gears, because many teeth cut the workpiece simultaneously. Today hobbing is a standard process producing millions of parts every day, yet no one has used it to machine axially asymmetric parts. A careful study of the hobbing principle yields two observations: first, the gear tooth profile matches the hob tooth profile; second, the gear contour is determined by the relative position of workpiece and hob. By combining hob design with control of this relative position, we find that hobbing can machine a variety of axially symmetric and asymmetric parts, such as star-shaped and polygonal parts. In particular, the method can efficiently machine continuously varying axially asymmetric parts. Finally, a comparison with conventional milling confirms that the machining time is far shorter than with milling. / Our machining center and new machining methods have many applications in biomedical engineering, the dental implant being a typical example. Authoritative statistics indicate that about 10% of people will have teeth restored with dental implants during their lifetime. Unfortunately, personalized implants have not been studied: implants currently on the market cannot precisely fit a patient's root condition or restore teeth under special oral conditions, so research on personalized implants is both urgent and commercially valuable. One difficulty is manufacturing, because personalized implants are hard to machine owing to their complex shape and material (titanium). Our multi-axes CNC Turn-Mill-Hob Machining Center and the new machining methods based on it can machine such implants efficiently and accurately. / With the ever increasing demand for reduced size and increased complexity and accuracy, traditional machine tools have become ineffective for machining miniature components. A typical example is the dental implant and the other is the pinion used in mechanical watch movements. With complex geometry and tight tolerance, few machine tools are capable of making these parts. We designed and built a CNC Turn-Mill-Hob Machining Center that is capable of machining various complex miniature parts. The machining center has 8 axes, an automatic bar feeder, an automatic part collection tray, and a custom-made CNC controller. In particular, the CNC controller gives not only higher accuracy but also ease of use.
In addition, to improve the accuracy, a software-based volumetric error compensation system is implemented. Based on experimental testing, the machining error is ± 4 μm for turning, ± 7 μm for milling, and the maximum profile error is less than ± 7.5 μm for gear hobbing. / Many biomedical parts are axial asymmetric parts. While these parts can be machined using conventional CNC machining methods, the efficiency is low and the cost is high. We proposed a new CNC machining method based on polar coordinate interpolation, which is better than Cartesian coordinate interpolation when rotational axes are involved. To facilitate the use of the polar coordinate interpolation module, a special G code is developed. This module is integrated into our CNC Turn-Mill-Hob Machining Center. / Another important development is the use of the hobbing method for machining axial symmetric / asymmetric parts. Invented some 100 years ago, hobbing is the most efficient method for machining gears. Its efficiency lies in multiple teeth cutting simultaneously. Presently, gear hobbing is a standard manufacturing process making millions of gears every day. However, no one has used it for machining axial asymmetrical parts. After carefully examining gear hobbing, it is found that the profile of the gear tooth is determined by a combination of the profile of the hob tooth and the relative position and motion between the hob and the workpiece. Therefore, by tuning the hob tooth profile and controlling the relative position and motion between the hob and the workpiece, it is possible to machine various axial symmetrical and asymmetrical parts, such as a star, a hexagon, and so on. This method can efficiently machine continuously changing axial asymmetrical parts. This is validated by means of experiments. The experiments also indicate that the new method is much more efficient than the conventional milling method. / Our machining center and new machining methods have many practical applications.
Dental implant is a typical example. It is estimated that 10% of people will need dental implants in their lifetime. Presently, there are a number of brands in the market, though these implants may not fit patients who have special oral conditions. In this case, custom-made implants are necessary. The key problem of the custom-made dental implant is manufacturing. Our multi-axes CNC Turn-Mill-Hob Machining Center and the new machining method can effectively machine custom-made dental implants. Moreover, the efficiency is good. / Detailed summary in vernacular field only. / Chen, Xianshuai. / Thesis (Ph.D.)--Chinese University of Hong Kong, 2012. / Includes bibliographical references (leaves 116-127). / Electronic reproduction. Hong Kong : Chinese University of Hong Kong, [2012] System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Abstract also in Chinese.
/ Abstract --- p.I / 摘要 --- p.III / Acknowledgement --- p.V / Table of Contents --- p.VI / List of Tables --- p.VIII / List of Figures --- p.IX / Acronym --- p.XIII / Chapter Chapter 1: --- Introduction --- p.1 / Chapter 1.1 --- Background --- p.1 / Chapter 1.2 --- Overall Literature Review --- p.3 / Chapter 1.3 --- Objectives --- p.17 / Chapter Chapter 2: --- The Multi-Axes CNC Turn-Mill-Hob Machining Center --- p.18 / Chapter 2.1 --- A Brief Review --- p.18 / Chapter 2.2 --- The Design and Prototype --- p.20 / Chapter 2.3 --- The CNC Controller --- p.26 / Chapter 2.4 --- The Calibration --- p.32 / Chapter 2.5 --- Cutting Tests --- p.35 / Chapter 2.6 --- Summary --- p.43 / Chapter Chapter 3: --- Hobbing Gears and Axial Asymmetric Parts --- p.45 / Chapter 3.1 --- A Brief Review --- p.45 / Chapter 3.2 --- The Theory --- p.47 / Chapter 3.3 --- Computer Simulation --- p.54 / Chapter 3.4 --- Cutting Tests --- p.68 / Chapter 3.5 --- Summary --- p.78 / Chapter Chapter 4: --- Milling Axial Asymmetric Parts --- p.80 / Chapter 4.1 --- A Brief Review --- p.80 / Chapter 4.2 --- The Theory --- p.81 / Chapter 4.3 --- Cutting Tests --- p.89 / Chapter 4.4 --- Summary --- p.94 / Chapter Chapter 5: --- Machining Dental Implants --- p.95 / Chapter 5.1 --- A Brief Review --- p.95 / Chapter 5.2 --- The Database of Custom-made Dental Implant --- p.98 / Chapter 5.3 --- The Design and FEA --- p.103 / Chapter 5.4 --- Cutting Tests --- p.108 / Chapter 5.5 --- Summary --- p.110 / Chapter Chapter 6: --- Concluding Remarks and Future Work --- p.111 / Chapter 6.1 --- Concluding Remarks --- p.111 / Chapter 6.2 --- Future Work --- p.113 / Bibliography --- p.116 / Publication Record --- p.127
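The polar-coordinate interpolation idea described in this entry, a linear axis (radius) and a rotary axis (angle) tracing a planar curve together, can be sketched as a coordinate conversion. This is an illustrative outline only: the function name and the (radius, angle) command pairs are assumptions, not the machine's actual G-code set.

```python
import math

def polar_interpolate(path):
    """Convert Cartesian face points to (X = radius, C = angle in degrees) commands.

    Sketch of polar-coordinate interpolation: each Cartesian target point is
    reached by positioning the linear axis at the radius and the rotary axis
    at the angle. The angle is unwrapped so the rotary axis turns continuously
    instead of jumping by ±360 degrees between neighbouring points.
    """
    cmds = []
    prev = 0.0
    for x, y in path:
        r = math.hypot(x, y)
        theta = math.degrees(math.atan2(y, x))
        while theta - prev > 180.0:
            theta -= 360.0
        while theta - prev < -180.0:
            theta += 360.0
        prev = theta
        cmds.append((r, theta))
    return cmds
```

When a contour crosses near the spindle centre the rotary axis must move quickly for small Cartesian motion, which is one reason the thesis develops dedicated G codes rather than relying on generic Cartesian interpolation.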
695

Autonomous visual learning for robotic systems

Beale, Dan January 2012 (has links)
This thesis investigates the problem of visual learning using a robotic platform. Given a set of objects, the robot's task is to autonomously manipulate, observe, and learn. This allows the robot to recognise objects in a novel scene and pose, or separate them into distinct visual categories. The main focus of the work is on autonomously acquiring object models using robotic manipulation. Autonomous learning is important for robotic systems. In the context of vision, it allows a robot to adapt to new and uncertain environments, updating its internal model of the world. It also reduces the amount of human supervision needed for building visual models. This leads to machines which can operate in environments with rich and complicated visual information, such as the home or industrial workspace, and in environments which are potentially hazardous for humans. The hypothesis is that inducing robot motion on objects aids the learning process. It is shown that extra information from the robot sensors provides enough information to localise an object and distinguish it from the background. Also, decisive planning allows the object to be separated and observed from a variety of different poses, giving a good foundation on which to build a robust classification model. Contributions include a new segmentation algorithm, a new classification model for object learning, and a method for allowing a robot to supervise its own learning in cluttered and dynamic environments.
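The hypothesis that robot-induced motion aids segmentation can be illustrated with simple frame differencing: pixels that change when the robot pushes an object mark that object against a static background. This toy sketch is an assumption-laden stand-in for the thesis's actual segmentation algorithm, which is not specified in the abstract.

```python
import numpy as np

def segment_by_motion(before, after, thresh=0.1):
    # pixels whose intensity changed when the robot moved the object
    return np.abs(after - before) > thresh

# toy 8x8 scene: static background plus an object the robot pushed 2 px right
before = np.zeros((8, 8)); before[3:5, 2:4] = 1.0
after  = np.zeros((8, 8)); after[3:5, 4:6] = 1.0
mask = segment_by_motion(before, after)
```

The changed-pixel mask covers both the object's old and new locations; because the robot knows the motion it commanded, it can attribute the change to the manipulated object rather than to background clutter.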
696

The identification of geometric errors in five-axis machine tools using the telescoping magnetic ballbar

Flynn, Joseph January 2016 (has links)
To maximise productivity and reduce scrap in high-value, low-volume production, five-axis machine tool (5A-MT) motion accuracy must be verified quickly and reliably. Numerous metrology instruments have been developed to measure errors arising from geometric imperfections within and between machine tool axes (amongst other sources). One example is the telescoping magnetic ballbar (TMBB), which is becoming an increasingly popular instrument to measure both linear and rotary axis errors. This research proposes a new TMBB measurement technique to rapidly, accurately and reliably measure all position-independent rotary axis errors in a 5A-MT. In this research, two literature reviews were conducted. The findings informed the subsequent development of a virtual machine tool (VMT). This VMT was used to capture the effects of rotary and linear axis position-independent geometric errors, and apparatus set-up errors, on a variety of candidate measurement routines. This new knowledge then informed the design of an experimental methodology to capture, on a commercial 5A-MT, specific phenomena that were observed within the VMT. Finally, statistical analysis of experimental measurements facilitated a quantification of the repeatability, strengths and limitations of the final testing method concept. The major contribution of this research is the development of a single set-up testing procedure to identify all 5A-MT rotary axis location errors, whilst remaining robust in the presence of set-up and linear axis location errors. Additionally, a novel variance-based sensitivity analysis approach was used to design testing procedures. By considering the effects of extraneous error sources (set-up and linear location) in the design and validation phases, an added robustness was introduced. Furthermore, this research marks the first usage of Monte Carlo uncertainty analysis in conjunction with rotary axis TMBB testing.
Experimental evidence has shown that the proposed corrections for set-up and linear axis errors are highly effective and indispensable in rotary axis testing of this kind. However, further development of the single set-up method is necessary, as geometric errors cannot always be measured identically at different testing locations. This highlights the importance of considering the influence of 5A-MT component errors on testing results, as the machine tool axes cannot necessarily be modelled as straight lines.
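The Monte Carlo uncertainty analysis mentioned above can be sketched in miniature: propagate a random set-up (centring) error through a simulated ballbar sweep and observe the spread it induces in the apparent radius. The geometry and parameter values here are illustrative assumptions, not the thesis's actual error model.

```python
import math, random

def apparent_radius(nominal_r, cx, cy, angle_deg):
    # distance from an offset pivot (cx, cy) to a point on the nominal circle
    a = math.radians(angle_deg)
    x, y = nominal_r * math.cos(a), nominal_r * math.sin(a)
    return math.hypot(x - cx, y - cy)

def monte_carlo_setup_error(nominal_r=100.0, sigma=0.01, n=2000, seed=1):
    # propagate a Gaussian centring (set-up) error into the measured
    # radius spread over one sweep, averaged over n random trials
    rng = random.Random(seed)
    spreads = []
    for _ in range(n):
        cx, cy = rng.gauss(0, sigma), rng.gauss(0, sigma)
        radii = [apparent_radius(nominal_r, cx, cy, a) for a in range(0, 360, 30)]
        spreads.append(max(radii) - min(radii))
    return sum(spreads) / n
```

A set-up offset of magnitude e produces a once-per-revolution radius variation of roughly 2e, which is why uncorrected centring errors can dominate the small rotary axis location errors the test is trying to measure.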
697

Machine learning and forward looking information in option prices

Hu, Qi January 2018 (has links)
The use of forward-looking information from option prices attracted a lot of attention after the 2008 financial crisis, which highlighted the difficulty of using historical data to predict extreme events. Although a considerable number of papers investigate the extraction of forward-looking information from cross-sectional option prices, Figlewski (2008) argues that it is still an open question and none of the techniques is clearly superior. This thesis focuses on extracting information from option prices and investigates two broad topics: applying machine learning to the estimation of the state price density, and recovering the natural probability from option prices. The estimation of the state price density (often described as the risk-neutral density in the option pricing literature) is of considerable importance since it contains valuable information about investors' expectations and risk preferences. However, this is a non-trivial task due to data limitations and complex arbitrage-free constraints. In this thesis, I develop a more efficient linear programming support vector machine (L1-SVM) estimator for the state price density which incorporates no-arbitrage restrictions and the bid-ask spread. This method does not depend on a particular approximation function or framework and is, therefore, universally applicable. In a parallel empirical study, I apply the method to options on the S&P 500, showing it to be comparatively accurate and smooth. In addition, since the existing literature has no consensus about what information is recovered by The Recovery Theorem, I empirically examine this recovery problem in a continuous diffusion setting. Using market data on S&P 500 index options and synthetic data generated by an Ornstein-Uhlenbeck (OU) process, I show that the recovered probability is not the real-world probability. Finally, to further explain why The Recovery Theorem fails and to show the existence of the associated martingale component, I demonstrate an example of bivariate recovery.
698

A constraint-based approach for assessing the capabilities of existing designs to handle product variation

Matthews, Jason Anthony January 2007 (has links)
All production machinery is designed with an inherent capability to handle slight variations in product. This is initially achieved simply by providing adjustments that allow, for example, changes in pack sizes to be accommodated through user settings or complete sets of change parts. Through the appropriate use of these abilities most variations in product can be handled. However, when extreme setups are considered, with major changes in product size and configuration, there is no guarantee that the existing machines are able to cope. The problem is even more difficult when completely new product families are proposed for production on an existing line. Such changes in product range are becoming more common as producers respond to demands for ever-increasing customization and product differentiation. The issue stems from a lack of knowledge about the capabilities of the machines being employed. This often forces the producer to undertake a series of practical product trials. These, however, can only be undertaken once the product form has been decided and produced in sufficient numbers, leaving little opportunity to make changes that could greatly improve the potential output of the line and reduce waste. There is thus a need for a supportive modelling approach that allows the effect of variation in products to be analyzed together with an understanding of the manufacturing machine's capability. Only through their analysis and interaction can the capabilities be fully understood and refined to make production possible. This thesis presents a constraint-based approach that offers a solution to these problems. It has been shown that, using this approach, a generic process can be formed to identify the limiting factors (constraints) of variant products to be processed. These identified constraints can be mapped to form the potential limits of performance for the machine.
The limits of performance of a system (performance envelopes) can be employed to assess the design capability to cope with product variation. The approach is successfully demonstrated on three industrial case studies.
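The constraint-mapping idea can be sketched as predicates evaluated over a grid of product variants: a variant lies inside the performance envelope only if every machine constraint holds. The constraints, parameters, and limits below are invented for illustration; the thesis's actual constraints come from the machine designs in its case studies.

```python
def within_envelope(width, speed, constraints):
    # a design can run a product variant only if every constraint holds
    return all(c(width, speed) for c in constraints)

# illustrative constraints for a hypothetical packaging line
constraints = [
    lambda w, s: 50 <= w <= 200,    # change-part width range, mm (assumed)
    lambda w, s: s <= 600 - 2 * w,  # throughput falls as packs get wider (assumed)
]

# sweep the product-variation space to map the performance envelope
envelope = [(w, s) for w in range(50, 201, 50)
                   for s in range(100, 501, 100)
                   if within_envelope(w, s, constraints)]
```

Plotting the feasible (width, speed) pairs traces the envelope boundary, letting a proposed product variant be checked against machine capability before any physical trial is run.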
699

SRML: Space Radio Machine Learning

Ferreira, Paulo Victor Rodrigues 27 April 2017 (has links)
Space-based communications systems to be employed by future artificial satellites, or by spacecraft during exploration missions, can potentially benefit from software-defined radio adaptation capabilities. Multiple communication requirements could compete for radio resources, whose availability may vary during the spacecraft's operational life span. Electronic components are prone to failure, and new instructions will eventually be received through software updates. Consequently, these changes may require a whole new near-optimal combination of parameters to be derived on the fly without instantaneous human interaction, or even without a human in the loop. Achieving a sufficiently good set of radio parameters can therefore be challenging, especially when the communication channels change dynamically due to orbital dynamics as well as atmospheric and space weather-related impairments. This dissertation presents an analysis and discussion of novel algorithms proposed to enable a cognition control layer for adaptive communication systems operating in space, using an architecture that merges machine learning techniques with wireless communication principles. The proposed cognitive engine proof-of-concept reasons over time through an efficient accumulated learning process. An implementation of the conceptual design is expected to be delivered to the SDR system located on the International Space Station as part of an experimental program. To support the development of the proposed cognitive engine algorithms, more realistic satellite-based communications channels are proposed, along with rain attenuation synthesizers for LEO orbits, channel state detection algorithms, and multipath coefficients as a function of the reflector's electrical characteristics. The achieved performance of the proposed solutions is compared with the state of the art, and novel performance benchmarks are provided for future research to reference.
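Rain attenuation synthesizers of the kind mentioned above are commonly built as a first-order Markov (Ornstein-Uhlenbeck) process on the logarithm of the attenuation, giving a lognormal marginal distribution. The sketch below follows that general idea; the function name and all parameter values are illustrative assumptions, not the dissertation's LEO-specific models.

```python
import math, random

def synthesize_rain_attenuation(n_steps, m=-1.0, s=0.8, beta=2e-4,
                                dt=1.0, seed=7):
    """Toy lognormal rain-attenuation time series (dB).

    Ornstein-Uhlenbeck dynamics on ln(attenuation): long-term mean m,
    standard deviation s, and dynamic parameter beta controlling how
    quickly the process reverts. Parameters are illustrative only.
    """
    rng = random.Random(seed)
    x = m
    out = []
    k = math.exp(-beta * dt)
    for _ in range(n_steps):
        # exact discrete-time update of the OU process
        x = m + k * (x - m) + s * math.sqrt(1 - k * k) * rng.gauss(0, 1)
        out.append(math.exp(x))   # attenuation in dB, lognormal marginal
    return out
```

A cognitive engine can be exercised against such a synthetic fade series to test whether it reconfigures modulation and coding before a deep fade breaks the link.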
700

Change-points Estimation in Statistical Inference and Machine Learning Problems

Zhang, Bingwen 14 August 2017 (has links)
"Statistical inference plays an increasingly important role in science, finance and industry. Despite the extensive research and wide application of statistical inference, most of the efforts focus on uniform models. This thesis instead considers statistical inference in models with abrupt changes. The task is to estimate the change-points at which the underlying model changes. We first study low dimensional linear regression problems in which the underlying model undergoes multiple changes. Our goal is to estimate the number and locations of change-points that segment the available data into different regions, and further to produce sparse and interpretable models for each region. To address the challenges of existing approaches and to produce interpretable models, we propose a sparse group Lasso (SGL) based approach for linear regression problems with change-points. We then extend the method to high dimensional nonhomogeneous linear regression models. Under certain assumptions and with a properly chosen regularization parameter, we show several desirable properties of the method. We further extend our studies to generalized linear models (GLM) and prove similar results. In practice, change-point inference usually involves high dimensional data, so it is natural to tackle it with distributed learning over feature-partitioned data, in which each machine in the cluster stores a subset of the features. One bottleneck for distributed learning is communication. To address this implementation concern, we design a communication-efficient algorithm for feature-partitioned data sets that speeds up not only change-point inference but also other classes of machine learning problems, including Lasso, support vector machine (SVM) and logistic regression."
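The sparse group Lasso penalty at the heart of this approach combines an elementwise l1 term with a groupwise l2 term, and its proximal operator shrinks coefficients first individually and then by group; zeroing an entire group is what yields segment-wise sparsity across candidate change-points. The sketch below implements that standard proximal operator on plain Python lists; it is a generic illustration of the SGL penalty, not the thesis's full estimation algorithm.

```python
import math

def soft_threshold(v, t):
    # elementwise lasso shrinkage
    return [math.copysign(max(abs(x) - t, 0.0), x) for x in v]

def sgl_prox(groups, lam1, lam2):
    """Proximal operator of the sparse group Lasso penalty.

    Apply the elementwise l1 shrinkage first, then shrink each group's
    l2 norm; a whole group is zeroed when its shrunk norm falls below
    lam2, which is what removes spurious change-point candidates.
    """
    out = []
    for g in groups:
        v = soft_threshold(g, lam1)
        norm = math.sqrt(sum(x * x for x in v))
        scale = max(1.0 - lam2 / norm, 0.0) if norm > 0 else 0.0
        out.append([scale * x for x in v])
    return out
```

In a change-point model each group collects the coefficient increments attributed to one candidate location, so groups that survive the shrinkage mark the estimated change-points while the surviving within-group entries give a sparse model for that segment.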
