291

Tail Risk Protection via reproducible data-adaptive strategies

Spilak, Bruno 15 February 2024 (has links)
This dissertation shows the potential of machine learning methods for managing tail risk in a non-stationary and high-dimensional setting. To this end, we compare, in a robust manner, data-dependent approaches from parametric and non-parametric statistics with data-adaptive methods. As these methods need to be reproducible to ensure trust and transparency, we start by proposing a new platform called Quantinar, which aims to set a new standard for academic publications. In the second chapter, we turn to the core subject of this thesis and compare various parametric, local parametric, and non-parametric methods to create a dynamic trading strategy that protects against tail risk in the Bitcoin cryptocurrency. In the third chapter, we propose a new portfolio allocation method, called NMFRB, that deals with high dimensions through a dimension reduction technique, convex Non-negative Matrix Factorization. This technique allows us to find latent, interpretable portfolios that are diversified out-of-sample. We show in two universes that the proposed method outperforms other classical machine-learning-based methods, such as Hierarchical Risk Parity (HRP), in terms of risk-adjusted returns. We also test the robustness of our results via Monte Carlo simulation. Finally, the last chapter combines our previous approaches into a tail-risk protection strategy for portfolios: we extend NMFRB to tail-risk measures, address the non-linear relationships between assets during tail events with a dedicated non-linear latent factor model, and develop a dynamic tail-risk protection strategy that handles the non-stationarity of asset returns using classical econometric models. We show that our strategy successfully reduces large drawdowns and outperforms other modern tail-risk protection strategies such as the Value-at-Risk-spread strategy.
We verify our findings by performing various data snooping tests.
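A minimal, hedged illustration of the kind of dynamic tail-risk protection overlay discussed above (not the dissertation's code): exposure to a risky asset is scaled down whenever a one-day Value-at-Risk forecast from a simple EWMA volatility model exceeds a tolerated loss. The EWMA model, the Gaussian VaR multiplier, the 5% loss target and the synthetic heavy-tailed returns are all illustrative assumptions.

```python
# Hedged sketch (not the thesis code): a minimal dynamic tail-risk protection
# overlay in the spirit of the strategies compared in the dissertation.
# The EWMA volatility model, the Gaussian VaR multiplier, the loss target and
# the synthetic returns are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
returns = rng.standard_t(df=3, size=2000) * 0.02      # synthetic heavy-tailed "BTC" returns

def ewma_vol(r, lam=0.94):
    """Exponentially weighted volatility estimate, RiskMetrics-style."""
    var = np.empty_like(r)
    var[0] = r[:50].var()                 # crude initialization for the toy example
    for t in range(1, len(r)):
        var[t] = lam * var[t - 1] + (1 - lam) * r[t - 1] ** 2
    return np.sqrt(var)

vol = ewma_vol(returns)
var_1pct = 2.33 * vol                     # Gaussian 99% one-day VaR proxy
target_var = 0.05                         # tolerated daily loss (assumption)

# Scale exposure down whenever the predicted VaR exceeds the tolerated loss;
# exposure at t only uses information up to t-1.
exposure = np.clip(target_var / var_1pct, 0.0, 1.0)
protected = exposure * returns

def max_drawdown(r):
    wealth = np.cumsum(r)                 # log-return approximation
    return (wealth - np.maximum.accumulate(wealth)).min()

print("max drawdown unprotected:", round(max_drawdown(returns), 3))
print("max drawdown protected:  ", round(max_drawdown(protected), 3))
```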
292

Algorithmic classification in tumour spheroid control experiments using time series analysis

Schmied, Jannik 05 June 2024 (has links)
Three-dimensional tumour spheroid control experiments play a pivotal role in the development and evaluation of cancer treatments. Conducting and evaluating such in vitro experiments, however, is time-consuming. This thesis details the development, implementation, and validation of an algorithmic model that classifies spheroids as either controlled or relapsed by assessing the success of their treatments based on criteria rooted in biological insights. The introduction of this model is crucial for biologists to accurately and efficiently predict treatment efficacy in 3D in vitro experiments. The motivation for this research is driven by the need to improve the objectivity and efficiency of treatment outcome evaluations, which have traditionally depended on manual and subjective assessments by biologists. The research involved creating a comprehensive dataset from multiple 60-day in vitro experiments by combining data from various sources, focusing on the growth dynamics of tumour spheroids subjected to different treatment regimens. Through preprocessing and analysis, growth characteristics were extracted and utilized as input features for the model. A feature selection and optimization technique was applied to refine the software model and improve its predictive accuracy. The model is based on a handful of comprehensive criteria, calibrated by employing a grid search mechanism for hyperparameter tuning to optimize accuracy. The validation process, conducted via independent test sets, confirmed the model’s capability to predict treatment outcomes with a high degree of reliability and an accuracy of about 99%. The findings reveal that algorithmic classification models can make a significant contribution to the standardization and automation of treatment efficacy assessment in tumour spheroid experiments.
Not only does this approach reduce the potential for human error and variability, but it also provides a scalable and objective means of evaluating treatment outcomes. Contents: 1 Introduction 1.1 Background and Motivation 1.2 Biological Background 1.3 Iteration Methodology 1.4 Objective of the Thesis 2 Definition of Basic Notation and Concepts 2.1 Time Series Analysis 2.2 Linear Interpolation 2.3 Simple Exponential Smoothing 2.4 Volume of a Spheroid 2.5 Heaviside Function 2.6 Least Squares Method 2.7 Linear Regression 2.8 Exponential Approximation 2.9 Grid Search 2.10 Binary Regression 2.11 Pearson Correlation Coefficient 3 Observation Data 3.1 General Overview 3.1.1 Structure of the Data 3.1.2 Procedure of Data Processing using 3D-Analysis 3.2 Data Engineering 3.2.1 Data Consolidation and Sanitization 3.2.2 Extension and Interpolation 3.2.3 Variance Reduction 4 Model Development 4.1 Modeling of Various Classification-Relevant Aspects 4.1.1 Primary Criteria 4.1.2 Secondary Criteria 4.1.3 Statistical Learning Approaches 4.2 Day of Relapse Estimation 4.3 Model Implementation 4.3.1 Combination of Approaches 4.3.2 Implementation in Python 4.4 Model Calibration 4.4.1 Consecutive Growth 4.4.2 Quintupling 4.4.3 Secondary Criteria 4.4.4 Combined Approach 5 Model Testing 5.1 Evaluation Methods 5.1.1 Applying the Model to New Data 5.1.2 Spheroid Control Probability 5.1.3 Kaplan-Meier Survival Analysis 5.1.4 Analysis of Classification Mismatches 5.2 Model Benchmark 5.2.1 Comparison to Human Raters 5.2.2 Comparison to Binary Regression Model 5.3 Robustness 5.3.1 Test using different Segmentation 5.3.2 Feature Reduction 5.3.3 Sensitivity 5.3.4 Calibration Templates 6 Discussion 6.1 Practical Application Opportunities 6.2 Evaluation of the Algorithmic Model 6.3 Limitations 7 Conclusion 7.1 Summary 7.2 Future Research Directions
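As a rough illustration of the criteria named in the table of contents ("Consecutive Growth", "Quintupling"), the following sketch classifies a single volume time series as controlled or relapsed. The thresholds, the relapse-day rule and the synthetic regrowth curve are assumptions for illustration, not the calibrated values from the thesis.

```python
# Hedged sketch: a rule-based classifier in the spirit of the thesis criteria
# ("consecutive growth", "quintupling"); thresholds and the relapse-day rule
# are illustrative assumptions, not the calibrated values from the thesis.
import numpy as np

def classify_spheroid(volumes, growth_days=5, quintupling_factor=5.0):
    """Return ('relapsed', day) or ('controlled', None) for one volume series."""
    v = np.asarray(volumes, dtype=float)
    baseline = v.min()                          # post-treatment minimum as reference
    # Criterion 1: sustained growth over `growth_days` consecutive measurements.
    growing = np.diff(v) > 0
    run = 0
    for day, g in enumerate(growing, start=1):
        run = run + 1 if g else 0
        if run >= growth_days:
            return "relapsed", day
    # Criterion 2: volume exceeds `quintupling_factor` times the minimum volume.
    over = np.nonzero(v > quintupling_factor * baseline)[0]
    if over.size:
        return "relapsed", int(over[0])
    return "controlled", None

# Toy example: a regrowing spheroid over a 60-day observation window.
days = np.arange(60)
volumes = 0.1 + 0.002 * np.exp(0.12 * days)     # synthetic exponential regrowth
print(classify_spheroid(volumes))
```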
293

Differentiation of Occlusal Discolorations and Carious Lesions with Hyperspectral Imaging In Vitro

Vosahlo, Robin, Golde, Jonas, Walther, Julia, Koch, Edmund, Hannig, Christian, Tetschke, Florian 19 April 2024 (has links)
Stains and stained incipient lesions can be challenging to differentiate with established clinical tools. New diagnostic techniques are required for improved distinction to enable early noninvasive treatment. This in vitro study evaluates the performance of artificial intelligence (AI)-based classification of hyperspectral imaging data for early occlusal lesion detection and differentiation from stains. Sixty-five extracted permanent human maxillary and mandibular bicuspids and molars (International Caries Detection and Assessment System [ICDAS] II 0–4) were imaged with a hyperspectral camera (Diaspective Vision TIVITA® Tissue, Diaspective Vision, Pepelow, Germany) at a distance of 350 mm, acquiring spatial and spectral information in the wavelength range 505–1000 nm; 650 fissural spectra were used to train classification algorithms (models) for automated distinction between stained but sound enamel and stained lesions. Stratified 10-fold cross-validation was used. The model with the highest classification performance, a fine k-nearest neighbor classification algorithm, was used to classify five additional tooth fissural areas. Polarization microscopy of ground sections served as reference. Compared to stained lesions, stained intact enamel showed higher reflectance in the wavelength range 525–710 nm but lower reflectance in the wavelength range 710–1000 nm. A fine k-nearest neighbor classification algorithm achieved the highest performance with a Matthews correlation coefficient (MCC) of 0.75, a sensitivity of 0.95 and a specificity of 0.80 when distinguishing between intact stained and stained lesion spectra. The superposition of color-coded classification results on further tooth occlusal projections enabled qualitative assessment of the entire fissure’s enamel health. AI-based evaluation of hyperspectral images is highly promising as a complementary method to visual and radiographic examination for early occlusal lesion detection.
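The classification setup described above (k-nearest neighbours on fissural spectra, stratified 10-fold cross-validation, Matthews correlation coefficient) can be sketched as follows. The synthetic spectra merely mimic the reported reflectance pattern (stained sound enamel brighter below about 710 nm, darker above), and the choice of k is an assumption.

```python
# Hedged sketch: kNN classification of fissural spectra with stratified
# 10-fold cross-validation and MCC, as described in the abstract. The synthetic
# spectra and the value of k are illustrative assumptions.
import numpy as np
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.metrics import make_scorer, matthews_corrcoef
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(1)
wavelengths = np.linspace(505, 1000, 100)       # spectral range of the camera
n_per_class = 325                               # 650 spectra in total, as in the study

def spectra(level_vis, level_nir):
    """Synthetic reflectance: one level below 710 nm, another above, plus noise."""
    base = np.where(wavelengths < 710, level_vis, level_nir)
    return base + rng.normal(0, 0.05, size=(n_per_class, wavelengths.size))

X = np.vstack([spectra(0.6, 0.4),               # stained but sound enamel
               spectra(0.4, 0.5)])              # stained carious lesion
y = np.array([0] * n_per_class + [1] * n_per_class)

clf = KNeighborsClassifier(n_neighbors=3)       # "fine kNN" roughly means few neighbours
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
mcc = cross_val_score(clf, X, y, cv=cv, scoring=make_scorer(matthews_corrcoef))
print("mean MCC:", round(mcc.mean(), 2))
```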
294

A data driven machine learning approach to differentiate between autism spectrum disorder and attention-deficit/hyperactivity disorder based on the best-practice diagnostic instruments for autism

Wolff, Nicole, Kohls, Gregor, Mack, Judith T., Vahid, Amirali, Elster, Erik M., Stroth, Sanna, Poustka, Luise, Kuepper, Charlotte, Roepke, Stefan, Kamp-Becker, Inge, Roessner, Veit 22 April 2024 (has links)
Autism spectrum disorder (ASD) and attention-deficit/hyperactivity disorder (ADHD) are two frequently co-occurring neurodevelopmental conditions that share certain symptomatology, including social difficulties. This presents practitioners with challenging (differential) diagnostic considerations, particularly in clinically more complex cases with co-occurring ASD and ADHD. Therefore, the primary aim of the current study was to apply a data-driven machine learning approach (support vector machine) to determine whether and which items from the best-practice clinical instruments for diagnosing ASD (ADOS, ADI-R) would best differentiate between four groups of individuals referred to specialized ASD clinics (i.e., ASD, ADHD, ASD + ADHD, ND = no diagnosis). We found that a subset of five features from both ADOS (clinical observation) and ADI-R (parental interview) reliably differentiated between ASD groups (ASD & ASD + ADHD) and non-ASD groups (ADHD & ND), and these features corresponded to the social-communication but also restrictive and repetitive behavior domains. In conclusion, the results of the current study support the idea that detecting ASD in individuals with suspected signs of the diagnosis, including those with co-occurring ADHD, is possible with considerably fewer items relative to the original ADOS/2 and ADI-R algorithms (i.e., 92% item reduction) while preserving relatively high diagnostic accuracy. Clinical implications and study limitations are discussed.
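A hedged sketch of the general approach, item selection with a linear support vector machine: synthetic "item scores" stand in for the ADOS/ADI-R data, and the use of recursive feature elimination to isolate five items is an illustrative assumption rather than the study's exact pipeline.

```python
# Hedged sketch: selecting a small item subset with a linear SVM and recursive
# feature elimination, in the spirit of the study's data-driven approach.
# The synthetic "item scores" and the RFE step are illustrative assumptions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import RFE
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Stand-in for ADOS/ADI-R item scores: 60 items, 5 of them truly informative
# for the ASD (1) vs non-ASD (0) distinction.
X, y = make_classification(n_samples=400, n_features=60, n_informative=5,
                           n_redundant=0, random_state=0)

svm = LinearSVC(C=1.0, max_iter=10000)
selector = RFE(svm, n_features_to_select=5)     # keep the 5 strongest items
model = make_pipeline(selector, LinearSVC(C=1.0, max_iter=10000))

acc = cross_val_score(model, X, y, cv=5)
print("selected items:", np.flatnonzero(selector.fit(X, y).support_))
print("cross-validated accuracy:", round(acc.mean(), 2))
```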
295

Classifiers for Discrimination of Significant Protein Residues and Protein-Protein Interaction Using Concepts of Information Theory and Machine Learning / Klassifikatoren zur Unterscheidung von Signifikanten Protein Residuen und Protein-Protein Interaktion unter Verwendung von Informationstheorie und maschinellem Lernen

Asper, Roman Yorick 26 October 2011 (has links)
No description available.
296

Generische Verkettung maschineller Ansätze der Bilderkennung durch Wissenstransfer in verteilten Systemen: Am Beispiel der Aufgabengebiete INS und ACTEv der Evaluationskampagne TRECVid

Roschke, Christian 08 November 2021 (has links)
Technological advances in the field of multimedia sensing and the related methods for data acquisition, storage, and processing are leading to immense amounts of data in media libraries and knowledge management systems in the Big Data environment. The underlying state-of-the-art processing algorithms are often developed in a problem-oriented manner. Due to the enormous amounts of data, reliable statements about their quality and applicability can only be made to a limited extent. The intellectual exploration of large corpora is likewise difficult, as the data would have to be checked almost in full, semi-intellectually, for valid statements, which requires specific expertise in the underlying data domain as well as a corresponding understanding of data handling and classification processes. In addition, there are particular requirements for hardware and software, which usually scale suboptimally because such systems are mostly developed and executed on multi-core computers without provision for the required distribution. Consequently, there is a lack of mechanisms to ensure the transferability of the methods to other application domains. This work addresses these challenges and focuses on the design and development of a distributed, holistic infrastructure that enables the automated processing of multimedia data, in the sense of feature extraction, data fusion, and metadata search, within a homogeneous yet distributed system. To this end, approaches from machine learning, distributed systems, data management, and virtualization are combined so that they can be applied to large data sets, evaluated, and optimized. In particular, current technologies and frameworks for pattern detection are analyzed and subjected to a performance evaluation so that a catalogue of criteria can be derived. The criteria identified in this way form the basis for a requirements analysis and the design of the required infrastructure. This architecture in turn forms the basis for experiments in the Big Data environment in context-specific use cases from scientific evaluation campaigns such as TRECVid. For this purpose, the generic applicability is investigated in the two task areas Instance Search and Activities in Extended Videos. Contents: List of Figures, List of Tables, 1 Motivation, 2 Methods and Strategies, 3 System Architecture, 4 Instance Search, 5 Activities in Extended Video, 6 Summary and Outlook, Appendix, Bibliography
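A toy sketch of the processing chain described above (distributed feature extraction, central fusion, metadata search), reduced to worker processes and a dictionary index; the thesis's actual virtualized, service-based infrastructure is far more elaborate, and all names and functions below are illustrative.

```python
# Hedged sketch: the chaining pattern described in the abstract (feature
# extraction -> data fusion -> metadata search) reduced to a minimal pipeline
# with worker processes. All functions here are illustrative stand-ins.
from concurrent.futures import ProcessPoolExecutor
from hashlib import sha1

def extract_features(item: str) -> dict:
    """Stand-in feature extractor: one short 'descriptor' per media item."""
    return {"item": item, "descriptor": sha1(item.encode()).hexdigest()[:8]}

def fuse(records: list[dict]) -> dict:
    """Stand-in data fusion: merge per-item descriptors into one metadata index."""
    return {r["item"]: r["descriptor"] for r in records}

if __name__ == "__main__":
    media_items = [f"video_{i:04d}.mp4" for i in range(8)]
    # Distribute feature extraction across processes, then fuse centrally.
    with ProcessPoolExecutor(max_workers=4) as pool:
        records = list(pool.map(extract_features, media_items))
    index = fuse(records)
    # Metadata search against the fused index.
    print({k: v for k, v in index.items() if k.endswith("0003.mp4")})
```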
297

Application of the Duality Theory

Lorenz, Nicole 15 August 2012 (has links) (PDF)
The aim of this thesis is to present new results concerning duality in scalar optimization. We show how the theory can be applied to optimization problems arising in the theory of risk measures, portfolio optimization and machine learning. First we give some notations and preliminaries we need within the thesis. After that we recall how the well-known Lagrange dual problem can be derived by using the general perturbation theory and give some generalized interior point regularity conditions used in the literature. Using these facts we consider some special scalar optimization problems having a composed objective function and geometric (and cone) constraints. We derive their duals, give strong duality results and optimality conditions under certain regularity conditions. In this way we complete and/or extend some results in the literature, especially by using the mentioned regularity conditions, which are weaker than the classical ones. We further consider a scalar optimization problem having single chance constraints and a convex objective function. We also derive its dual, give a strong duality result and further consider a special case of this problem. Thus we show how the conjugate duality theory can be used for stochastic programming problems and extend some results given in the literature. In the third chapter of this thesis we consider convex risk and deviation measures. We present some more general measures than the ones given in the literature and derive formulas for their conjugate functions. Using these we calculate some dual representation formulas for the risk and deviation measures and correct some formulas in the literature. Finally we prove some subdifferential formulas for measures and risk functions by using the facts above. The generalized deviation measures we introduced in the previous chapter can be used to formulate some portfolio optimization problems we consider in the fourth chapter. Their duals, strong duality results and optimality conditions are derived by using the general theory and the conjugate functions, respectively, given in the second and third chapter. Analogous calculations are done for a portfolio optimization problem having single chance constraints, using the general theory given in the second chapter. Thus we give an application of the duality theory in the well-developed field of portfolio optimization. We close this thesis by considering a general Support Vector Machines problem and deriving its dual using the conjugate duality theory. We give a strong duality result and necessary as well as sufficient optimality conditions. By considering different cost functions we obtain problems for Support Vector Regression and Support Vector Classification. We extend the results given in the literature by dropping the assumption of invertibility of the kernel matrix. We use a cost function that generalizes Vapnik's well-known ε-insensitive loss and consider the optimization problems that arise from it. We show how the general theory can be applied to a real data set; in particular, we predict concrete compressive strength by using a special Support Vector Regression problem.
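For readers unfamiliar with the perturbation approach mentioned in the abstract, a brief sketch of how the Lagrange dual arises from it; the notation is the standard conjugate-duality one and may differ from the thesis in details.

```latex
% Hedged sketch (standard notation, may differ from the thesis): the Lagrange
% dual obtained from the general perturbation framework. Requires amsmath.
\[
(P)\quad \inf_{x\in X}\,\Phi(x,0), \qquad
(D)\quad \sup_{y^{*}\in Y^{*}}\,\{-\Phi^{*}(0,y^{*})\},
\qquad
\Phi^{*}(x^{*},y^{*})=\sup_{x,y}\,\{\langle x^{*},x\rangle+\langle y^{*},y\rangle-\Phi(x,y)\}.
\]
Choosing the Lagrange perturbation function
\[
\Phi_{L}(x,y)=
\begin{cases}
f(x), & x\in S,\ g(x)\in y-C,\\
+\infty, & \text{otherwise},
\end{cases}
\]
and writing $\lambda=-y^{*}$ yields the classical Lagrange dual
\[
(D_{L})\quad \sup_{\lambda\in C^{*}}\ \inf_{x\in S}\,\bigl[f(x)+\langle\lambda,g(x)\rangle\bigr].
\]
% Weak duality v(D) <= v(P) always holds; strong duality needs a regularity
% condition of the kind discussed in the thesis.
```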
298

Application of the Duality Theory: New Possibilities within the Theory of Risk Measures, Portfolio Optimization and Machine Learning

Lorenz, Nicole 28 June 2012 (has links)
The aim of this thesis is to present new results concerning duality in scalar optimization. We show how the theory can be applied to optimization problems arising in the theory of risk measures, portfolio optimization and machine learning. First we give some notations and preliminaries we need within the thesis. After that we recall how the well-known Lagrange dual problem can be derived by using the general perturbation theory and give some generalized interior point regularity conditions used in the literature. Using these facts we consider some special scalar optimization problems having a composed objective function and geometric (and cone) constraints. We derive their duals, give strong duality results and optimality conditions under certain regularity conditions. In this way we complete and/or extend some results in the literature, especially by using the mentioned regularity conditions, which are weaker than the classical ones. We further consider a scalar optimization problem having single chance constraints and a convex objective function. We also derive its dual, give a strong duality result and further consider a special case of this problem. Thus we show how the conjugate duality theory can be used for stochastic programming problems and extend some results given in the literature. In the third chapter of this thesis we consider convex risk and deviation measures. We present some more general measures than the ones given in the literature and derive formulas for their conjugate functions. Using these we calculate some dual representation formulas for the risk and deviation measures and correct some formulas in the literature. Finally we prove some subdifferential formulas for measures and risk functions by using the facts above. The generalized deviation measures we introduced in the previous chapter can be used to formulate some portfolio optimization problems we consider in the fourth chapter. Their duals, strong duality results and optimality conditions are derived by using the general theory and the conjugate functions, respectively, given in the second and third chapter. Analogous calculations are done for a portfolio optimization problem having single chance constraints, using the general theory given in the second chapter. Thus we give an application of the duality theory in the well-developed field of portfolio optimization. We close this thesis by considering a general Support Vector Machines problem and deriving its dual using the conjugate duality theory. We give a strong duality result and necessary as well as sufficient optimality conditions. By considering different cost functions we obtain problems for Support Vector Regression and Support Vector Classification. We extend the results given in the literature by dropping the assumption of invertibility of the kernel matrix. We use a cost function that generalizes Vapnik's well-known ε-insensitive loss and consider the optimization problems that arise from it. We show how the general theory can be applied to a real data set; in particular, we predict concrete compressive strength by using a special Support Vector Regression problem.
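Since the Support Vector Regression application to concrete compressive strength lends itself to a short example, here is a hedged baseline using the standard ε-insensitive SVR from scikit-learn on synthetic stand-in data; the thesis itself works with a generalized loss and its own dual formulation, which are not reproduced here.

```python
# Hedged sketch: epsilon-insensitive Support Vector Regression as a baseline
# for the concrete-compressive-strength application mentioned in the abstract.
# The standard scikit-learn SVR and the synthetic data are stand-ins for the
# thesis's generalized loss and real data set.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

# Stand-in for the concrete data: 8 mix/age features, one strength target.
X, y = make_regression(n_samples=500, n_features=8, noise=10.0, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

model = make_pipeline(
    StandardScaler(),
    SVR(kernel="rbf", C=10.0, epsilon=0.5),   # epsilon is the insensitivity width
)
model.fit(X_tr, y_tr)
print("R^2 on held-out data:", round(model.score(X_te, y_te), 2))
```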
