1.
A Data Cleaning Framework for Trajectory Clustering. Idrissov, Agzam Y. Unknown Date
No description available.
2.
Preprocessing and Postprocessing in Linear Optimization. Huang, Xin. 06 1900 (has links)
This thesis gives an overall survey of preprocessing and postprocessing techniques in linear optimization (LO) and their implementation in the software package McMaster Interior Point Method (McIPM).
We first review the basic concepts and theorems of LO. Then we present all the techniques used in preprocessing and the corresponding operations in postprocessing. Further, we discuss the implementation issues in our software development. Finally, we test a series of problems from the Netlib test set and compare our results with state-of-the-art software such as LIPSOL and CPLEX. / Thesis / Master of Science (MS)
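The abstract names no specific reductions; as a minimal sketch of what paired presolve and postsolve steps can look like (illustrative Python, not McIPM's implementation), consider a problem min c^T x subject to Ax <= b, lb <= x <= ub, with two classic reductions: substituting out fixed variables and dropping empty rows.

    import numpy as np

    def presolve(A, b, c, lb, ub):
        """Two classic LO presolve reductions (illustrative, not McIPM's code)."""
        fixed = lb == ub                      # variables pinned to one value
        x_fixed = lb[fixed]
        b = b - A[:, fixed] @ x_fixed         # fold fixed variables into b
        A, c = A[:, ~fixed], c[~fixed]
        empty = ~A.any(axis=1)                # rows left with no coefficients
        if np.any(b[empty] < 0):              # an empty row needs 0 <= b_i
            raise ValueError("infeasible: empty row with negative rhs")
        A, b = A[~empty], b[~empty]
        return A, b, c, lb[~fixed], ub[~fixed], fixed, x_fixed

    def postsolve(x_reduced, fixed, x_fixed):
        """Postprocessing: re-insert fixed variables into the reduced solution."""
        x = np.empty(fixed.size)
        x[fixed] = x_fixed
        x[~fixed] = x_reduced
        return x

Postsolve can only restore the original solution because presolve recorded which variables it eliminated, which is why the two phases must be designed together.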
3.
Personal identification based on handwriting. Said, Huwida E. S. January 1999 (has links)
No description available.
4.
Předzpracování dat / Data Preprocessing. Vašíček, Radek. January 2008 (has links)
This thesis surveys problems in data preprocessing. The first part deals with the overview and description of characteristic tests for describing attributes, and with methods for working with data and attributes. The second part describes work with the program RapidMiner, paying attention to its individual preprocessing functions and describing what each one does. The third part compares results obtained using preprocessing methods with results obtained without data preprocessing.
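As an illustration of the kind of attribute characterization and preprocessing functions such a survey covers (a hypothetical Python sketch, not tied to RapidMiner or the thesis's own tooling):

    import numpy as np

    def summarize_attribute(col):
        """Characterize a numeric attribute: the per-attribute statistics
        typically inspected before choosing a preprocessing method."""
        col = np.asarray(col, dtype=float)
        return {
            "missing": int(np.isnan(col).sum()),
            "mean": float(np.nanmean(col)),
            "std": float(np.nanstd(col)),
            "min": float(np.nanmin(col)),
            "max": float(np.nanmax(col)),
        }

    def preprocess_attribute(col):
        """Two standard steps: impute missing values with the mean, then
        rescale to zero mean and unit variance (z-score normalization)."""
        col = np.asarray(col, dtype=float)
        col = np.where(np.isnan(col), np.nanmean(col), col)  # mean imputation
        std = col.std()
        return (col - col.mean()) / std if std > 0 else col - col.mean()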
5.
An investigation of algorithms for the solution of integer programming problems. Abdul-Hamid, Fatimah. January 1995 (has links)
No description available.
6.
Systém předzpracování dat pro dobývání znalostí z databází / Data Preprocessing System for Knowledge Discovery in Databases. Kotinová, Hana. January 2009 (has links)
The aim of this diploma thesis was to create an application for data preprocessing. The application works with files in CSV format and is useful for preparing data when solving data mining tasks. It was created using the programming language Java. This text discusses the problems, solutions, and algorithms associated with data preprocessing, and discusses similar systems such as Mining Mart and SumatraTT. A complete application user guide is provided in the main part of this text.
7.
Attribution Standardization for Integrated Concurrent Engineering. Baker, Tyson J. 30 June 2005 (has links) (PDF)
Product design is a creative process, often subject to rapid and numerous design change requirements. To facilitate geometric redesign iterations, parametric Computer-Aided Design (CAD) systems were introduced. To manage the numerous product design iterations produced by parametric CAD systems, Product Data Management (PDM) systems were developed to capture, document, and manage each product revision. PDM has proved effective thus far at managing design history. However, PDM is built upon database management systems (DBMS), which are capable of far more than simply managing product revision history. Product data consists not only of the physical geometry used to describe it, but also of a host of non-geometric data, referred to as attributes. Examples of attributes include material properties, boundary conditions, finite element mesh information, manufacturing operations, assembly operations, cost, etc. Downstream Computer-Aided Engineering (CAE) applications apply attributes to (preprocess) the geometry to perform their respective operations. These attributes are not permanently associated with the geometry and may have to be recreated each time the geometry changes; preprocessing for highly complex CAE analyses can sometimes require weeks of effort. An attribution method is presented which addresses the creation, storage, and management issues facing attributes in the CAD and CAE environments. The research explores the use of database management systems for defining, instantiating, and managing attributes in the CAD environment, so that downstream CAE applications may retrieve the attributes from the DBMS to automate preprocessing. The attribution system results in standardized attribute definitions, which form the basis for communicating attributes universally among different downstream CAE applications.
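The abstract does not give the schema; the following is a hypothetical sketch of the core idea (standardized attribute definitions stored in a DBMS, instantiated against geometry, and queried by downstream CAE tools), here in Python with SQLite. All table and column names are invented for illustration.

    import sqlite3

    con = sqlite3.connect(":memory:")
    con.executescript("""
    CREATE TABLE attr_def (
        name  TEXT PRIMARY KEY,        -- standardized attribute name
        dtype TEXT NOT NULL,           -- declared value type
        units TEXT                     -- physical units, if any
    );
    CREATE TABLE attr_instance (
        geom_id INTEGER NOT NULL,      -- id of the CAD entity being decorated
        name    TEXT NOT NULL REFERENCES attr_def(name),
        value   TEXT NOT NULL,
        PRIMARY KEY (geom_id, name)
    );
    """)

    con.execute("INSERT INTO attr_def VALUES ('youngs_modulus', 'REAL', 'GPa')")
    con.execute("INSERT INTO attr_instance VALUES (42, 'youngs_modulus', '200.0')")

    # A CAE preprocessor would pull attributes for the entities it meshes:
    for geom_id, name, value in con.execute(
            "SELECT geom_id, name, value FROM attr_instance WHERE geom_id = 42"):
        print(geom_id, name, value)

Because the definitions live in one shared table, every downstream application reads the same attribute vocabulary instead of recreating it per tool.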
8.
Preprocessing rules for the dynamic layout problem. Kanya, Denise L. January 1994 (has links)
No description available.
9.
Axiom relevance decision engine: technical report. Frank, Mario. January 2012 (has links)
This document presents an axiom selection technique for classic first-order theorem proving based on the relevance of axioms for the proof of a conjecture. It is based on unifiability of predicates and does not need statistical information such as symbol frequency. The scope of the technique is the reduction of the axiom set and the increase of the number of provable conjectures in a given time. Since the technique generates a subset of the axiom set, it can be used as a preprocessor for automated theorem proving. This technical report describes the conception, implementation, and evaluation of ARDE. The selection method, which is based on a breadth-first graph search by unifiability of predicates, is a weakened form of the connection calculus and uses specialised variants of unifiability to speed up the selection. The implementation of the concept is evaluated by comparison with the results of the 2012 world championship of theorem provers (CASC J6). It is shown that both the theorem prover leanCoP, which uses the connection calculus, and E, which uses equality reasoning, can benefit from the selection approach. The evaluation also shows that the concept is applicable to theorem-proving problems with thousands of formulae and that the selection is independent of the calculus used by the theorem prover. / This technical report describes the conception, implementation, and evaluation of a method for selecting logical formulae according to their relevance for the proof of a given formula. The method is applied exclusively to first-order predicate logic, although it is also suitable for higher-order predicate logics. It uses a unification-based breadth-first search in a graph in which every node is a predicate and every edge is a unifiability relation. The goal of the method is to reduce a given set of formulae to a size manageable by current theorem provers, which makes it suitable as a preprocessing step for automated theorem proving. To speed up the search, a weakened form of unification is used in addition to standard unification. The system was submitted, together with the theorem prover leanCoP, to the 2012 world championship of theorem provers (CASC J6) in Manchester, where it helped leanCoP solve problems that leanCoP alone could not handle. Tests with leanCoP and the theorem prover E after the championship show that the method is independent of the calculus used and, for both provers, has a positive effect on the provability of problems with large formula sets.
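As a rough sketch of relevance-based axiom selection by breadth-first search (illustrative Python, not the ARDE implementation: ARDE connects predicates by unifiability, which this sketch approximates crudely with shared predicate symbols):

    def select_axioms(conjecture_preds, axioms, max_depth=2):
        """Select axioms reachable from the conjecture within max_depth BFS layers.
        axioms maps an axiom name to the set of predicate symbols it mentions;
        an axiom joins a layer when it shares a predicate with the frontier."""
        reachable = set(conjecture_preds)
        selected = set()
        for _ in range(max_depth):
            layer = {name for name, preds in axioms.items()
                     if name not in selected and preds & reachable}
            if not layer:
                break
            selected |= layer
            for name in layer:             # predicates of newly selected axioms
                reachable |= axioms[name]  # seed the next search layer
        return selected

    axioms = {
        "a1": {"mortal", "human"},
        "a2": {"human", "greek"},
        "a3": {"prime", "odd"},            # unrelated: never selected
    }
    print(select_axioms({"mortal"}, axioms))   # -> {'a1', 'a2'}

The depth bound is what makes the output a strict subset of the axiom set, which is the property that lets the search act as a preprocessor for an arbitrary prover.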
10.
Face Recognition with Preprocessing and Neural Networks. Habrman, David. January 2016 (has links)
Face recognition is the problem of identifying individuals in images. This thesis evaluates two methods used to determine whether pairs of face images belong to the same individual or not. The first method is a combination of principal component analysis and a neural network, and the second method is based on state-of-the-art convolutional neural networks. They are trained and evaluated using two different data sets. The first set contains many images with large variations in, for example, illumination and facial expression. The second consists of fewer images with small variations. Principal component analysis allowed the use of smaller networks. The largest network has 1.7 million parameters, compared to the 7 million used in the convolutional network. The use of smaller networks lowered the training time and evaluation time significantly. Principal component analysis proved to be well suited for the data set with small variations, outperforming the convolutional network, which needs larger data sets to avoid overfitting. The reduction in data dimensionality, however, led to difficulties classifying the data set with large variations. The generous number of images in this set allowed the convolutional method to reach higher accuracies than the principal component method.
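As a sketch of the PCA preprocessing step described here (illustrative Python with random data standing in for face images; not the thesis's code), images are flattened to vectors, centered, and projected onto the top principal components, so the downstream network sees far fewer inputs than raw pixels:

    import numpy as np

    rng = np.random.default_rng(0)
    images = rng.standard_normal((200, 64 * 64))   # 200 fake 64x64 "faces"

    def fit_pca(X, n_components):
        """Return the mean and the top n_components principal directions of X."""
        mean = X.mean(axis=0)
        # Rows of Vt are principal directions, ordered by singular value.
        _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
        return mean, Vt[:n_components]

    def project(X, mean, components):
        """Reduce X to its coordinates in the principal subspace."""
        return (X - mean) @ components.T

    mean, components = fit_pca(images, n_components=128)
    reduced = project(images, mean, components)
    print(reduced.shape)   # (200, 128): 128 features instead of 4096 pixels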