81

TOA Wireless Location Algorithm with NLOS Mitigation Based on LS-SVM in UWB Systems

Lin, Chien-hung 29 July 2008 (has links)
One of the major problems encountered in wireless location is the effect of non-line-of-sight (NLOS) propagation. When the direct path from the mobile station (MS) to the base stations (BSs) is blocked by obstacles or buildings, the signal arrival times are delayed, so the measurements include an error due to the excess propagation path. Using such NLOS measurements for localization greatly degrades the localization performance of the system. In this thesis, a time-of-arrival (TOA) based location system with an NLOS mitigation algorithm is proposed. The proposed method uses a least squares support vector machine (LS-SVM), with its parameters selected by particle swarm optimization (PSO), to establish a regression model that estimates the propagation distances and reduces the NLOS propagation errors. By using a weighted objective function, the distance estimates are combined with suitable weight factors derived from the differences between the estimated and measured values. By exploiting the optimality of the weighted objective function, the method mitigates the NLOS effects and reduces the range errors. Computer simulation results in ultra-wideband (UWB) environments show that the proposed NLOS mitigation algorithm efficiently reduces the mean and variance of the NLOS measurements, and that the proposed method outperforms other methods in localization accuracy under different NLOS conditions.
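
The abstract does not include the underlying formulation, but an LS-SVM regressor reduces to solving one linear system in the dual variables. The following is a minimal, illustrative sketch of RBF-kernel LS-SVM regression applied to correcting NLOS-biased range measurements; the hyperparameters, the toy data, and the omission of the PSO search and the weighted TOA objective are all simplifications, not the thesis's implementation.

```python
import numpy as np

def lssvm_train(X, y, gamma=10.0, sigma=1.0):
    """Fit an LS-SVM regressor by solving its dual linear system (Suykens formulation)."""
    n = X.shape[0]
    # RBF kernel matrix
    d2 = np.sum(X**2, 1)[:, None] + np.sum(X**2, 1)[None, :] - 2 * X @ X.T
    K = np.exp(-d2 / (2 * sigma**2))
    # Solve [[0, 1^T], [1, K + I/gamma]] [b; alpha] = [0; y]
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / gamma
    sol = np.linalg.solve(A, np.concatenate(([0.0], y)))
    return sol[0], sol[1:]  # bias b, dual coefficients alpha

def lssvm_predict(X_train, X_new, b, alpha, sigma=1.0):
    d2 = np.sum(X_new**2, 1)[:, None] + np.sum(X_train**2, 1)[None, :] - 2 * X_new @ X_train.T
    return np.exp(-d2 / (2 * sigma**2)) @ alpha + b

# Toy example: map NLOS-biased range measurements back toward the true ranges
rng = np.random.default_rng(0)
true_range = rng.uniform(5, 50, (200, 1))
measured = true_range + rng.exponential(2.0, (200, 1))   # NLOS adds a positive bias
b, alpha = lssvm_train(measured, true_range.ravel(), gamma=10.0, sigma=5.0)
corrected = lssvm_predict(measured, measured, b, alpha, sigma=5.0)
```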
82

Battery Health Estimation in Electric Vehicles

Klass, Verena January 2015 (has links)
For the broad commercial success of electric vehicles (EVs), it is essential to understand in depth how batteries behave in this challenging application. This thesis has therefore focused on studying automotive lithium-ion batteries with respect to their performance under EV operation. In particular, it addresses the need for simple methods that estimate the state-of-health (SOH) of batteries during EV operation, in order to ensure safe, reliable, and cost-effective operation. Within the scope of this thesis, a method has been developed that can estimate the SOH indicators capacity and internal resistance. The method is based solely on signals that are available on board during ordinary EV operation, such as the measured current, voltage, temperature, and the battery management system's state-of-charge estimate. The approach builds on data-driven battery models (support vector machines (SVM) or system identification) and on virtual tests corresponding to the standard performance tests established in laboratory testing for capacity and resistance determination. The proposed method has been demonstrated on battery data collected in field tests and has also been verified in the laboratory. After a first proof-of-concept with battery pack data from a plug-in hybrid electric vehicle (PHEV) field test, the method was improved with the help of a laboratory study in which battery electric vehicle (BEV) operation of a battery cell was emulated under controlled conditions, providing a thorough validation opportunity. Precise partial capacity and instantaneous resistance estimates were derived, and an accurate diffusion resistance estimate was achieved by including a current-history variable in the SVM-based model. The dynamic system identification battery model gave precise total resistance estimates as well. The SOH estimation method was also applied to a data set from emulated hybrid electric vehicle (HEV) operation of a battery cell on board a heavy-duty vehicle, where on-board standard test validation revealed accurate dynamic voltage estimation performance of the applied model even during high-current situations. To illustrate the method's intended implementation, up-to-date SOH indicators have been estimated from driving data over a one-year period.
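
The core pattern described here, fitting a data-driven voltage model to on-board signals and then running a "virtual test" on it, can be sketched with a standard support vector regression. The example below is an assumption-laden illustration, not the thesis's implementation: the synthetic log (current, state of charge, temperature), the toy voltage response, and the simple pulse-based resistance read-out are all placeholders.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

# Hypothetical on-board log: columns = current [A], state of charge [-], temperature [degC]
rng = np.random.default_rng(0)
X = np.column_stack([
    rng.uniform(-100, 100, 2000),   # current
    rng.uniform(0.2, 0.9, 2000),    # SOC
    rng.uniform(10, 35, 2000),      # temperature
])
# Toy terminal-voltage response standing in for real measurements
y = 3.2 + 0.9 * X[:, 1] - 0.001 * X[:, 0] + 0.002 * (X[:, 2] - 25) + rng.normal(0, 0.01, 2000)

# Data-driven battery voltage model (RBF-kernel support vector regression)
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.005))
model.fit(X, y)

# "Virtual test": predict the voltage response to a constant-current pulse at fixed SOC
# and temperature, and read an apparent resistance from the voltage step.
soc, temp, pulse = 0.6, 25.0, 50.0
v_rest = model.predict([[0.0, soc, temp]])[0]
v_load = model.predict([[pulse, soc, temp]])[0]
print("apparent resistance estimate [Ohm]:", (v_rest - v_load) / pulse)
```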
83

A Classification Framework for Imbalanced Data

Phoungphol, Piyaphol 18 December 2013 (has links)
As information technology advances, the demand for reliable and highly accurate predictive models is increasing in many domains. Traditional classification algorithms can be limited in their performance on highly imbalanced data sets. In this dissertation, we study two common problems that arise when training data is imbalanced and propose effective algorithms to solve them. First, we investigate the problem of building a multi-class classification model from an imbalanced class distribution. We develop an effective technique that improves the performance of the model by formulating the problem as a multi-class SVM whose objective is to maximize the G-mean value; a ramp loss function is used to simplify and solve the problem. Experimental results on multiple real-world datasets confirm that the new method can effectively solve the multi-class classification problem when the datasets are highly imbalanced. Second, we explore the problem of learning a global classification model from distributed data sources under privacy constraints. Here, not only do the data sources have different class distributions, but combining the data into one central dataset is also prohibited. We propose a privacy-preserving framework for building a global SVM from distributed data sources. The framework avoids constructing a global kernel matrix by mapping non-linear inputs to a linear feature space and then solving a distributed linear SVM on these virtual points. Our method addresses both the imbalance and the privacy problems while achieving the same level of accuracy as a regular SVM. Finally, we extend the framework to handle high-dimensional data by using Generalized Multiple Kernel Learning to select a sparse combination of features and kernels. This new model produces a smaller set of features but yields much higher accuracy.
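
The second contribution, replacing the global kernel matrix by an explicit feature map followed by a linear SVM, can be illustrated with off-the-shelf components. The sketch below uses scikit-learn's RBFSampler (a random Fourier feature approximation) with LinearSVC and computes the G-mean targeted in the first contribution; it is a stand-in on synthetic data, not the dissertation's ramp-loss or privacy-preserving formulation.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.kernel_approximation import RBFSampler
from sklearn.metrics import recall_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

def g_mean(y_true, y_pred):
    # Geometric mean of per-class recalls (sensitivity across all classes)
    recalls = recall_score(y_true, y_pred, average=None)
    return float(np.prod(recalls) ** (1.0 / len(recalls)))

# Imbalanced three-class toy dataset
X, y = make_classification(n_samples=3000, n_features=20, n_informative=10,
                           n_classes=3, weights=[0.8, 0.15, 0.05], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Explicit approximate kernel map + linear SVM avoids building a global kernel matrix;
# class_weight="balanced" is a simple proxy for imbalance-aware training.
clf = make_pipeline(RBFSampler(gamma=0.1, n_components=500, random_state=0),
                    LinearSVC(class_weight="balanced", max_iter=5000))
clf.fit(X_tr, y_tr)
print("G-mean:", g_mean(y_te, clf.predict(X_te)))
```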
84

A Hybrid Risk Model for Hip Fracture Prediction

Jiang, Peng January 2015 (has links)
Hip fracture has long been considered the most serious consequence of osteoporosis, leading to chronic pain, disability, and even death. In the elderly population femur fractures are very common: it is estimated that 50% of women aged 50 or older may experience a hip fracture in their remaining life. Hip fracture is among the most common injuries and can lead to substantial morbidity and mortality; in the US alone, over 250,000 hip fractures occur each year, and this number is expected to double by the year 2040. Statistics indicate that over 20% of people who experience a hip fracture die within one year and only 25% recover fully. Femur fractures are thus becoming a major social and economic burden on the health care system. In practice it is very difficult to predict femur fracture risk, mainly because there is no robust and easily obtained measure that quantifies bone strength. Clinicians use bone mineral density (BMD) as an indicator of osteoporosis and fracture risk, but several studies have shown that BMD cannot be used alone to identify bone strength; in fact, the majority of patients who suffer fractures have normal or even higher BMD scores. A large number of risk factors contribute to the occurrence of femur fracture and should also be involved in predicting hip fracture risk, for example age, weight, height, and ethnicity, and some factors might not have been identified yet. There is therefore a high level of uncertainty in the clinical dataset, which makes it difficult to construct and validate a hip fracture risk prediction model. The objective of this dissertation is to construct an improved hip fracture risk prediction model. Because experimental and clinical data are difficult to obtain, computational simulations can help increase the predictive ability of the risk model. In this research, the hip fracture risk model is based on a support vector machine (SVM) trained on a clinical dataset from the Women's Health Initiative (WHI). In order to improve the SVM-based hip fracture risk model, data from a fully parameterized finite element (FE) model are used to supplement the clinical dataset. The FE model allows one to simulate a wide range of geometries and material properties in the hip region and provides a measure of risk based on mechanical quantities (e.g., strain). This dissertation presents new approaches for fusing the clinical data with the FE data in order to improve the predictive capability of the hip fracture risk prediction model. Two approaches are introduced: an "augmented space" approach and a "computational patients" approach. This work has led to the construction of a new online hip fracture risk calculator with free access.
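
The abstract does not spell out how the FE data are fused with the clinical data, so the sketch below only illustrates one plausible reading of the "computational patients" idea under explicit assumptions: FE-simulated samples are pooled with (synthetic, hypothetical) clinical records before training an SVM, with a lower sample weight on the simulated records. All feature names, labels, and weights here are illustrative, not the dissertation's method.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(1)

# Hypothetical clinical records: [age, weight, height, BMD], label = fracture (1) / none (0)
X_clin = rng.normal([70, 68, 162, 0.85], [8, 12, 7, 0.12], size=(400, 4))
y_clin = (X_clin[:, 3] + rng.normal(0, 0.1, 400) < 0.78).astype(int)

# Hypothetical "computational patients": samples generated by FE simulation and labelled
# by a strain-based fracture criterion instead of observed clinical outcomes.
X_fe = rng.normal([72, 70, 160, 0.80], [10, 14, 8, 0.15], size=(400, 4))
y_fe = (X_fe[:, 3] + 0.05 * rng.standard_normal(400) < 0.80).astype(int)

# Pool both sources; down-weight simulated samples relative to clinical ones.
X = np.vstack([X_clin, X_fe])
y = np.concatenate([y_clin, y_fe])
w = np.concatenate([np.ones(len(y_clin)), 0.5 * np.ones(len(y_fe))])

model = make_pipeline(StandardScaler(), SVC(kernel="rbf", probability=True))
model.fit(X, y, svc__sample_weight=w)
print("predicted fracture risk:", model.predict_proba([[75, 60, 158, 0.70]])[0, 1])
```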
85

Screening Web Breaks in a Pressroom by Soft Computing

Ahmad, Alzghoul January 2008 (has links)
Web breaks are considered one of the most significant runnability problems in a pressroom. This work analyses the relation between the occurrence of web breaks and various parameters (variables) characterizing the paper, the printing press, and the printing process. A large number of variables, 61 in total, obtained off-line as well as measured online during the printing process, are used in the investigation; each paper reel is characterized by a vector x of 61 components. Two main approaches are explored. The first treats the problem as a task of classifying data into "break" and "non-break" classes. The procedures of classifier training, selection of relevant input variables, and selection of the classifier's hyper-parameters are aggregated into one process based on genetic search. The second approach combines genetic-search-based variable selection with data mapping into a low-dimensional space; the genetic search results in a variable set providing the best mapping according to some quality function. The empirical study was performed using data collected at a pressroom in Sweden. The total number of data points available for the experiments was 309, of which only 37 represent web break cases. The results of the investigation show that the linear relations between the independent variables and the web break frequency are not strong. Three important groups of variables were identified, namely Lab data (variables characterizing paper properties, measured off-line in a paper mill lab), Ink registry (variables characterizing operator actions aimed at adjusting the ink registry), and Web tension. We found that the most important variables are: Ink registry Y LS MD (adjustments of yellow ink registry in the machine direction on the lower paper side), Air permeability (characterizes paper porosity), Paper grammage, Elongation MD, and four variables characterizing web tension: Moment mean, Min sliding Mean, Web tension variance, and Web tension mean. The proposed methods were helpful in finding the variables influencing the occurrence of web breaks and can also be used for solving other industrial problems.
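
The abstract describes a genetic search that jointly handles variable selection and classifier training. The sketch below is a deliberately minimal version of that idea: a small genetic algorithm over boolean feature masks, scored by cross-validated SVM accuracy on synthetic data. Population size, mutation rate, and the fitness metric are illustrative choices, not the thesis's settings.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=300, n_features=61, n_informative=8, random_state=0)

def fitness(mask):
    if not mask.any():
        return 0.0
    return cross_val_score(SVC(kernel="rbf"), X[:, mask], y, cv=3).mean()

# Tiny genetic search over feature subsets (boolean masks)
pop = rng.random((20, X.shape[1])) < 0.2              # initial population
for generation in range(15):
    scores = np.array([fitness(m) for m in pop])
    parents = pop[np.argsort(scores)[-10:]]           # keep the best half
    children = []
    for _ in range(10):
        a, b = parents[rng.integers(10)], parents[rng.integers(10)]
        cross = rng.random(X.shape[1]) < 0.5           # uniform crossover
        child = np.where(cross, a, b)
        child ^= rng.random(X.shape[1]) < 0.02         # mutation: flip a few bits
        children.append(child)
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(m) for m in pop])]
print("selected variables:", np.flatnonzero(best), "cv accuracy:", fitness(best))
```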
86

DETECTION OF ROOF BOUNDARIES USING LIDAR DATA AND AERIAL PHOTOGRAPHY

Gombos, Andrew David 01 January 2010 (has links)
The recent growth in inexpensive laser scanning sensors has created entire fields of research aimed at processing this data. One application is determining the polygonal boundaries of roofs as seen from an overhead view; the resulting building outlines have many commercial as well as military applications. My work in this area has created a segmentation algorithm whose descriptive features are computationally and theoretically simpler than those of previous methods. A support vector machine, which to date has not been commonly used for roof detection, segments the data points using these features. Despite the simplicity of the feature calculations, the accuracy of the algorithm is similar to that of previous work. I also describe a basic polygonal extraction method, which is acceptable for basic roofs.
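
The specific features are not named in the abstract; the sketch below only illustrates the general pattern of training an SVM on simple per-point descriptors of a LiDAR cloud. The two descriptors used here (height and a local planarity score from the neighbourhood covariance) and the synthetic roof/ground data are hypothetical stand-ins, not the thesis's feature set.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors
from sklearn.svm import SVC

def point_features(points, k=10):
    """Per-point features: height and a local planarity score from the k-NN covariance."""
    nn = NearestNeighbors(n_neighbors=k).fit(points)
    _, idx = nn.kneighbors(points)
    feats = []
    for neighbours in points[idx]:                     # (k, 3) neighbourhood per point
        w = np.sort(np.linalg.eigvalsh(np.cov(neighbours.T)))   # ascending eigenvalues
        planarity = (w[1] - w[0]) / max(w[2], 1e-9)    # high for locally planar patches
        feats.append([neighbours[0, 2], planarity])    # [height z, planarity]
    return np.array(feats)

# Synthetic cloud: an elevated planar "roof" patch plus rough low "ground" returns
rng = np.random.default_rng(0)
roof = np.column_stack([rng.uniform(0, 10, 300), rng.uniform(0, 10, 300),
                        5.0 + rng.normal(0, 0.03, 300)])
ground = np.column_stack([rng.uniform(0, 10, 300), rng.uniform(0, 10, 300),
                          rng.normal(0, 0.5, 300)])
points = np.vstack([roof, ground])
labels = np.concatenate([np.ones(300), np.zeros(300)])   # 1 = roof, 0 = other

clf = SVC(kernel="rbf").fit(point_features(points), labels)
print("training accuracy:", clf.score(point_features(points), labels))
```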
87

REGION-COLOR BASED AUTOMATED BLEEDING DETECTION IN CAPSULE ENDOSCOPY VIDEOS

2014 June 1900 (has links)
Capsule endoscopy (CE) is a unique technique that enables non-invasive and practical visualization of the entire small intestine, and it has attracted a critical mass of studies aimed at improving it. Among the numerous studies being performed in capsule endoscopy, tremendous efforts are being made to develop software algorithms that identify clinically important frames in CE videos. This thesis presents a computer-assisted method for automated detection of CE video frames that contain bleeding. Specifically, a methodology is proposed to classify the frames of CE videos into bleeding and non-bleeding frames. It is a supervised, support vector machine (SVM) based method that classifies the frames on the basis of color features derived from image regions, where image regions are characterized by statistical features. With 15 candidate features available, an exhaustive feature selection is performed to obtain the best feature subset, defined as the combination of features with the highest bleeding discrimination ability according to three performance metrics: accuracy, sensitivity, and specificity. A ground-truth label annotation method is also proposed in order to partially automate the delineation of bleeding regions for training the classifier. The method produced promising results, with sensitivity and specificity values up to 94%. All experiments were performed separately for the RGB and HSV color spaces. Experimental results show that the best feature subset is the combination of the mean values of the red and green planes in the RGB (Red-Green-Blue) color space, and the combination of the mean values of all three planes in the HSV (Hue-Saturation-Value) color space.
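
The exhaustive search over candidate features is straightforward to express in code. The sketch below enumerates feature subsets, scores each with a cross-validated SVM, and reports accuracy, sensitivity, and specificity for the best one; the synthetic data, the restriction to small subsets (to keep the demo fast), and the use of accuracy as the selection criterion are placeholders for the thesis's region-color features and full exhaustive search.

```python
import numpy as np
from itertools import combinations
from sklearn.datasets import make_classification
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import cross_val_predict
from sklearn.svm import SVC

# Stand-in for the 15 region-color statistics per frame (label 1 = bleeding)
X, y = make_classification(n_samples=400, n_features=15, n_informative=4, random_state=0)

def scores(cols):
    y_hat = cross_val_predict(SVC(kernel="rbf"), X[:, list(cols)], y, cv=5)
    tn, fp, fn, tp = confusion_matrix(y, y_hat).ravel()
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return accuracy, sensitivity, specificity

# Restricted to subsets of up to 3 features here; the thesis searches all subsets of 15.
best = max(
    (cols for r in range(1, 4) for cols in combinations(range(15), r)),
    key=lambda cols: scores(cols)[0],                  # select on accuracy in this demo
)
print("best subset:", best, "acc/sens/spec:", scores(best))
```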
89

Analysing E-mail Text Authorship for Forensic Purposes

Corney, Malcolm W. January 2003 (has links)
E-mail has become the most popular Internet application, and with its rise in use has come an inevitable increase in the use of e-mail for criminal purposes. It is possible for an e-mail message to be sent anonymously or through spoofed servers, so computer forensics analysts need a tool that can identify the author of such messages. This thesis describes the development of such a tool using techniques from the fields of stylometry and machine learning. An author's style can be reduced to a pattern by measuring various stylometric features of the text; e-mail messages also contain macro-structural features that can be measured. Together, these features can be used with the Support Vector Machine learning algorithm to classify or attribute authorship of e-mail messages to an author, provided a suitable sample of messages is available for comparison. In an investigation, the set of authors may need to be reduced from an initially large list of possible suspects. This research has therefore also trialled authorship characterisation based on sociolinguistic cohorts, such as gender and language background, as a technique for profiling an anonymous message so that the suspect list can be reduced.
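
As a rough illustration of the stylometry-plus-SVM idea, the sketch below computes a handful of simple stylometric measurements per message and trains an SVM to attribute authorship between two hypothetical authors. The feature list, example messages, and labels are toy assumptions; the thesis's actual feature set is much richer and also includes macro-structural e-mail features.

```python
import re
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

FUNCTION_WORDS = ["the", "of", "and", "to", "in", "that", "it", "is", "was", "for"]

def stylometric_features(text):
    words = re.findall(r"[a-z']+", text.lower())
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    n_words = max(len(words), 1)
    feats = [
        len(words) / max(len(sentences), 1),                   # average sentence length
        np.mean([len(w) for w in words]) if words else 0.0,    # average word length
        len(set(words)) / n_words,                             # type/token ratio
    ]
    feats += [words.count(fw) / n_words for fw in FUNCTION_WORDS]  # function-word rates
    return feats

# Hypothetical training data: e-mail bodies with known authors
messages = ["I think we should meet tomorrow about the report.",
            "pls send docs asap, thx. will call later",
            "The analysis of the data is attached for your review.",
            "got it, thx! talk soon"]
authors = ["A", "B", "A", "B"]

X = np.array([stylometric_features(m) for m in messages])
clf = make_pipeline(StandardScaler(), SVC(kernel="linear")).fit(X, authors)
print(clf.predict([stylometric_features("Attached is the report; we should discuss it tomorrow.")]))
```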
90

Application potential of support vector machine (SVM) classification for scoring problems in database marketing: an empirical study using the example of churn prediction for magazine subscriptions

Zimmermann, Martin January 2008 (has links)
Also published as: doctoral dissertation, University of Jena, 2008
