  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
361

Power Line For Data Communication : Characterisation And Simulation

Yogesh, S 07 1900 (has links) (PDF)
No description available.
362

Learning with Complex Performance Measures : Theory, Algorithms and Applications

Narasimhan, Harikrishna January 2016 (has links) (PDF)
We consider supervised learning problems, where one is given objects with labels, and the goal is to learn a model that can make accurate predictions on new objects. These problems abound in applications, ranging from medical diagnosis to information retrieval to computer vision. Examples include binary or multiclass classification, where the goal is to learn a model that can classify objects into two or more categories (e.g. categorizing emails into spam or non-spam); bipartite ranking, where the goal is to learn a model that can rank relevant objects above the irrelevant ones (e.g. ranking documents by relevance to a query); class probability estimation (CPE), where the goal is to predict the probability of an object belonging to different categories (e.g. probability of an internet ad being clicked by a user). In each case, the accuracy of a model is evaluated in terms of a specified `performance measure'. While there has been much work on designing and analyzing algorithms for different supervised learning tasks, we have complete understanding only for settings where the performance measure of interest is the standard 0-1 or a loss-based classification measure. These performance measures have a simple additive structure, and can be expressed as an expectation of errors on individual examples. However, in many real-world applications, the performance measure used to evaluate a model is often more complex, and does not decompose into a sum or expectation of point-wise errors. These include the binary or multiclass G-mean used in class-imbalanced classification problems; the F1-measure and its multiclass variants popular in text retrieval; and the (partial) area under the ROC curve (AUC) and precision@k employed in ranking applications. How does one design efficient learning algorithms for such complex performance measures, and can these algorithms be shown to be statistically consistent, i.e. 
shown to converge in the limit of infinite data to the optimal model for the given measure? How does one develop efficient learning algorithms for complex measures in online/streaming settings where the training examples need to be processed one at a time? These are questions that we seek to address in this thesis. Firstly, we consider the bipartite ranking problem with the AUC and partial AUC performance measures. We start by understanding how bipartite ranking with AUC is related to the standard 0-1 binary classification and CPE tasks. It is known that a good binary CPE model can be used to obtain both a good binary classification model and a good bipartite ranking model (formally, in terms of regret transfer bounds), and that a binary classification model does not necessarily yield a CPE model. However, not much is known about other directions. We show that in a weaker sense (where the mapping needed to transform a model from one problem to another depends on the underlying probability distribution), a good bipartite ranking model for AUC can indeed be used to construct a good binary classification model, and also a good binary CPE model. Next, motivated by the increasing number of applications (e.g. biometrics, medical diagnosis, etc.), where performance is measured, not in terms of the full AUC, but in terms of the partial AUC between two false positive rates (FPRs), we design batch algorithms for optimizing partial AUC in any given FPR range. Our algorithms optimize structural support vector machine based surrogates, which, unlike for the full AUC, do not admit a straightforward decomposition into simpler terms. We develop polynomial time cutting plane solvers for solving the optimization, and provide experiments to demonstrate the efficacy of our methods. We also present an application of our approach to predicting chemotherapy outcomes for cancer patients, with the aim of improving treatment decisions. 
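As a concrete illustration of the quantity being optimized, here is a minimal sketch (not the authors' structural-SVM method) of the empirical partial AUC over an FPR range, computed by trapezoidal integration of the ROC curve. All names here are illustrative:

```python
def roc_points(pos_scores, neg_scores):
    """Empirical ROC as (FPR, TPR) points, thresholding at each observed score."""
    thresholds = sorted(set(pos_scores) | set(neg_scores), reverse=True)
    pts = [(0.0, 0.0)]
    for t in thresholds:
        tpr = sum(s >= t for s in pos_scores) / len(pos_scores)
        fpr = sum(s >= t for s in neg_scores) / len(neg_scores)
        pts.append((fpr, tpr))
    return pts

def partial_auc(pos_scores, neg_scores, fpr_lo=0.0, fpr_hi=1.0):
    """Unnormalized area under the empirical ROC curve restricted to
    FPR in [fpr_lo, fpr_hi], by trapezoidal integration with linear
    interpolation of the TPR at the clipped endpoints."""
    pts = roc_points(pos_scores, neg_scores)
    area = 0.0
    for (x0, y0), (x1, y1) in zip(pts, pts[1:]):
        lo, hi = max(x0, fpr_lo), min(x1, fpr_hi)
        if lo >= hi:
            continue
        def interp(x):
            return y0 if x1 == x0 else y0 + (y1 - y0) * (x - x0) / (x1 - x0)
        area += 0.5 * (interp(lo) + interp(hi)) * (hi - lo)
    return area
```

With the defaults this reduces to the ordinary full AUC; practitioners often divide the result by (fpr_hi - fpr_lo) to renormalize it to [0, 1].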
Secondly, we develop algorithms for optimizing (surrogates for) complex performance measures in the presence of streaming data. A well-known method for solving this problem for standard point-wise surrogates such as the hinge surrogate, is the stochastic gradient descent (SGD) method, which performs point-wise updates using unbiased gradient estimates. However, this method cannot be applied to complex objectives, as here one can no longer obtain unbiased gradient estimates from a single point. We develop a general stochastic method for optimizing complex measures that avoids point-wise updates, and instead performs gradient updates on mini-batches of incoming points. The method is shown to provably converge for any performance measure that satisfies a uniform convergence requirement, such as the partial AUC, precision@k and F1-measure, and in experiments, is often several orders of magnitude faster than the state-of-the-art batch methods, while achieving similar or better accuracies. Moreover, for specific complex binary classification measures, which are concave functions of the true positive rate (TPR) and true negative rate (TNR), we are able to develop stochastic (primal-dual) methods that can indeed be implemented with point-wise updates, by using an adaptive linearization scheme. These methods admit convergence rates that match the rate of the SGD method, and are again several times faster than the state-of-the-art methods. Finally, we look at the design of consistent algorithms for complex binary and multiclass measures. For binary measures, we consider the practically popular plug-in algorithm that constructs a classifier by applying an empirical threshold to a suitable class probability estimate, and provide a general methodology for proving consistency of these methods. We apply this technique to show consistency for the F1-measure, and under a continuity assumption on the distribution, for any performance measure that is monotonic in the TPR and TNR. 
For the case of multiclass measures, a simple plug-in method is no longer tractable, as in the place of a single threshold parameter, one needs to tune at least as many parameters as the number of classes. Using an optimization viewpoint, we provide a framework for designing learning algorithms for multiclass measures that are general functions of the confusion matrix, and as an instantiation, provide an efficient and provably consistent algorithm based on the bisection method for multiclass measures that are ratio-of-linear functions of the confusion matrix (e.g. micro F1). The algorithm outperforms the state-of-the-art SVMPerf method in terms of both accuracy and running time. Overall, in this thesis, we have looked at various aspects of complex performance measures used in supervised learning problems, leading to several new algorithms that are often significantly better than the state-of-the-art, to improved theoretical understanding of the performance measures studied, and to novel real-life applications of the algorithms developed.
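The plug-in approach described above — thresholding a class probability estimate and choosing the threshold empirically — can be sketched generically for the F1-measure as follows (an illustration of the idea, not the thesis's exact algorithm):

```python
def plugin_f1_classifier(probs, labels):
    """Plug-in approach for F1: sweep a threshold on class-probability
    estimates and keep the one maximizing empirical F1 on a tuning sample.
    Returns (best_threshold, best_f1)."""
    best_t, best_f1 = 0.5, -1.0
    for t in sorted(set(probs)):
        preds = [p >= t for p in probs]
        tp = sum(pr and y for pr, y in zip(preds, labels))
        fp = sum(pr and not y for pr, y in zip(preds, labels))
        fn = sum((not pr) and y for pr, y in zip(preds, labels))
        denom = 2 * tp + fp + fn
        f1 = 2 * tp / denom if denom else 0.0
        if f1 > best_f1:
            best_t, best_f1 = t, f1
    return best_t, best_f1
```

The consistency result sketched in the abstract says, roughly, that if the probability estimates converge to the true class probabilities, this empirically tuned threshold converges to the F1-optimal one.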
363

High-Speed Testable Radix-2 N-Bit Signed-Digit Adder

Manjuladevi Rajendraprasad, Akshay 27 August 2019 (has links)
No description available.
364

Development and characterization of Ti-Sn-SiC and Ti-Nb-SiC composites by powder metallurgical processing.

Mathebula, Christina 08 1900 (has links)
M. Tech. (Department of Metallurgical Engineering, Faculty of Engineering Technology), Vaal University of Technology. / This work is an investigation into the development and characterisation of porous Ti-Sn-SiC and Ti-Nb-SiC composites. Pure titanium (Ti), tin (Sn), niobium (Nb) and silicon carbide (SiC) powders were used as starting materials. The Ti-Sn-SiC and Ti-Nb-SiC composites were produced by the powder metallurgy (PM) press-and-sinter route. Sn is an α-phase stabilizer while Nb is a β-phase stabilizer in Ti alloys. A systematic study of binary Ti-Sn and Ti-Nb alloys was conducted with the addition of SiC particles. The addition of Sn influences the microstructure of the titanium alloy. With increasing Sn content, the density of the Ti-Sn alloy samples decreases. An increase in the Sn content from 10 to 25 wt.% resulted in decreased hardness. The Ti-Sn binary alloys revealed stability of the HCP phase with increasing Sn content. The pores of the Ti-Sn-SiC composites were evenly distributed throughout the materials. The sintered densities increase from 94.69% to 96.38%. XRD analysis detected the HCP crystal lattice structure for the Ti5.4Sn3.8SiC and Ti5.6Sn3.8SiC composites. The XRD pattern of Ti5.8Sn3.8SiC reveals both HCP and FCC crystal structures. The HCP phase has lattice parameters a = 2.920 Å and c = 4.620 Å, with a smaller c/a ratio of 1.589. Additionally, an FCC lattice parameter a = 5.620 Å (space group Fm-3m, #225) was obtained for both the Ti5.8Sn3.8SiC and Ti6.0Sn3.8SiC XRD patterns. Optical microscopy analysis of the Ti-Nb alloys revealed equiaxed grains with the light β-phase segregating on the grain boundaries. The Ti9Nb1 alloy has the lowest Vickers hardness of all the alloys, while the Ti8Nb2 and Ti7.5Nb2.5 alloys are harder due to their higher Nb content. Generally, the densities of the Ti-Nb alloys increased with increasing Nb content. 
The HCP and BCC phases have lattice parameters a = 2.951 Å, c = 4.683 Å and a = 3.268 Å, respectively. An HCP (α′) phase was detected in the Ti8.5Nb1.5 alloy with lattice parameters a = 5.130 Å, c = 9.840 Å, while the BCC phase had a = 3.287 Å. The sintered Ti8Nb2 alloy also had the α′-phase, with a = 5.141 Å, c = 9.533 Å, and a BCC phase with a = 3.280 Å. Similarly, the Ti7.5Nb2.5 alloy formed an α′-phase with a = 5.141 Å, c = 9.533 Å and a BCC phase with a = 3.280 Å. For the 10 and 15 wt.% Nb alloys, very porous structures were observed. The pores appear spherical and widely distributed. As the Nb content is increased to 20 wt.% (Ti7Nb2SiC) and 25 wt.% (Ti7Nb2.5SiC), porosity was minimized. The sintered densities of the Ti-Sn alloys decrease from 95.90% to 92.80% with increasing Sn content, while the sintered densities of the Ti-Sn-SiC composites increase from 94.69% to 96.38%. The high porosity that developed in Ti7Nb1SiC and Ti7Nb2.5SiC lowered the densities of these composites. The sintered densities of the Ti-Nb alloys increase from 92.08% to 97.65% with increasing Nb content. In terms of hardness, Ti7Nb1SiC and Ti7Nb2.5SiC showed the lowest values, while the Ti7Nb1.5SiC and Ti7Nb2SiC composites measured 511.74 HV and 527.678 HV, respectively. The porosity levels were increased by the addition of SiC in the Ti-Sn-SiC and Ti-Nb-SiC composites. The XRD analysis revealed phase transformation in the Ti-Nb alloys and Ti-Nb-SiC composites.
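For readers unfamiliar with the density figures quoted above: sintered densities are commonly reported as relative densities, i.e. the measured density over the theoretical rule-of-mixtures density of the powder blend. A small sketch of that calculation, using illustrative handbook densities for Ti and Nb rather than values from the thesis:

```python
def rule_of_mixtures_density(wt_fractions, densities):
    """Theoretical density of a powder blend from weight fractions:
    rho_th = 1 / sum(w_i / rho_i). Fractions must sum to 1."""
    assert abs(sum(wt_fractions) - 1.0) < 1e-9
    return 1.0 / sum(w / rho for w, rho in zip(wt_fractions, densities))

def relative_density(measured, theoretical):
    """Sintered (relative) density as a percentage of theoretical."""
    return 100.0 * measured / theoretical

# Illustrative: an 80/20 wt.% Ti-Nb blend (Ti ~4.506 g/cm^3, Nb ~8.57 g/cm^3)
rho_th = rule_of_mixtures_density([0.8, 0.2], [4.506, 8.57])
```

A measured sintered density of, say, 4.7 g/cm³ against this theoretical value would then be reported as roughly 94% relative density.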
365

A classificação das formas binárias aplicada em máquinas de catástrofes /

Oliveira, Alessandra Roberta Custodio de. January 2010 (has links)
Advisor: Eliris Cristina Rizziolli / Committee: Aldício José Miranda / Committee: Carina Alves / Abstract (translated from the Portuguese): This work deals with the geometric classification of binary quadratic and cubic forms. In addition, we apply this classification to the study of catastrophe machines. / Abstract: This work presents the geometric classification of binary quadratic, cubic and quartic forms, and applies that classification to the study of catastrophe machines. / Master's
366

Comparative study of neural networks and design of experiments to the classification of HIV status / Wilbert Sibanda.

Sibanda, Wilbert January 2013 (has links)
This research addresses the novel application of design of experiments, artificial neural networks and logistic regression to study the effect of demographic characteristics on the risk of acquiring HIV infection among antenatal clinic attendees in South Africa. The annual antenatal HIV survey is the only major national indicator of HIV prevalence in South Africa. This is a vital technique for understanding the changes in the HIV epidemic over time. The annual antenatal clinic data contains the following demographic characteristics for each pregnant woman: age (herein called mother's age), partner's age (herein father's age), population group (race), level of education, gravidity (number of pregnancies), parity (number of children born), HIV status and syphilis status. This project applied a screening design of experiments technique to rank the effects of individual demographic characteristics on the risk of acquiring an HIV infection. There are various screening design techniques, such as fractional or full factorial and Plackett-Burman designs. In this work, a two-level fractional factorial design was selected for the purposes of screening. In addition to screening designs, this project employed response surface methodologies (RSM) to estimate interaction and quadratic effects of demographic characteristics using a central composite face-centered design and a Box-Behnken design. Furthermore, this research presents the novel application of multi-layer perceptron (MLP) neural networks to model the demographic characteristics of antenatal clinic attendees. A review report was produced to study the application of neural networks to modelling HIV/AIDS around the world. The latter report is important to enhance our understanding of the extent to which neural networks have been applied to study the HIV/AIDS pandemic. Finally, a binary logistic regression technique was employed to benchmark the results obtained by the design of experiments and neural networks methodologies. 
The two-level fractional factorial design demonstrated that HIV prevalence was highly sensitive to changes in the mother's age (15-55 years) and level of her education (Grades 0-13). The central composite face-centered and Box-Behnken designs employed to study the individual and interaction effects of demographic characteristics on the spread of HIV in South Africa, demonstrated that the HIV status of an antenatal clinic attendee was highly sensitive to changes in the pregnant mother's age and her educational level. In addition, the interaction of the mother's age with other demographic characteristics was also found to be an important determinant of the risk of acquiring an HIV infection. Furthermore, the central composite face-centered and Box-Behnken designs illustrated that, individually, the pregnant mother's parity and her partner's age had no marked effect on her HIV status. However, the pregnant woman's parity and her male partner's age did show marked effects on her HIV status in two-way interactions with other demographic characteristics. The multilayer perceptron (MLP) sensitivity test also showed that the age of the pregnant woman had the greatest effect on the risk of acquiring an HIV infection, while her gravidity and syphilis status had the lowest effects. The outcome of the MLP modelling produced the same results obtained by the screening and response surface methodologies. The binary logistic regression technique was compared with a Box-Behnken design to further elucidate the differential effects of demographic characteristics on the risk of acquiring HIV amongst pregnant women. The two methodologies indicated that the age of the pregnant woman and her level of education had the most profound effects on her risk of acquiring an HIV infection. To facilitate the comparison of the performance of the classifiers used in this study, a receiver operating characteristics (ROC) curve was applied. 
Theoretically, an ROC analysis provides tools to select optimal models and to discard suboptimal ones independent of the cost context or the classification distribution. SAS Enterprise Miner™ was employed to develop the required receiver operating characteristic (ROC) curves. To validate the results obtained by the above classification methodologies, a credit scoring add-on in SAS Enterprise Miner™ was used to build binary target scorecards comprised of HIV positive and negative datasets for probability determination. The process involved grouping variables using weights-of-evidence (WOE), prior to performing a logistic regression to produce predicted probabilities. The process of creating bins for the scorecard enables the study of the inherent relationship between demographic characteristics and an individual's HIV status. This technique increases the understanding of the risk ranking ability of the scorecard method, while offering an added advantage of being predictive.
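The weights-of-evidence grouping mentioned in the scorecard step can be sketched generically as follows (the standard WOE formula with made-up bin counts; not the study's data):

```python
import math

def weight_of_evidence(bins):
    """WOE per bin: ln( (bin_goods / total_goods) / (bin_bads / total_bads) ).
    `bins` is a list of (goods, bads) counts, e.g. (HIV-negative, HIV-positive)
    counts per age band; returns one WOE value per bin."""
    total_g = sum(g for g, _ in bins)
    total_b = sum(b for _, b in bins)
    return [math.log((g / total_g) / (b / total_b)) for g, b in bins]
```

Bins with WOE near zero carry little information about the target; large positive or negative WOE marks bins where the outcome distribution departs strongly from the overall rate, which is what makes the binned scorecard interpretable.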
368

The Rasch Sampler

Verhelst, Norman D., Hatzinger, Reinhold, Mair, Patrick 22 February 2007 (has links) (PDF)
The Rasch sampler is an efficient algorithm to sample binary matrices with given marginal sums. It is a Markov chain Monte Carlo (MCMC) algorithm. The program can handle matrices of up to 1024 rows and 64 columns. A special option allows sampling square matrices with given marginals and a fixed main diagonal, a problem prominent in social network analysis. In all cases the stationary distribution is uniform. The user has control over the serial dependency. (authors' abstract)
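The standard move in samplers of this kind is the "checkerboard swap": choose two rows and two columns, and if the 2×2 submatrix is [[1,0],[0,1]] or [[0,1],[1,0]], flip it — both row and column sums are preserved. A minimal sketch of one such step (an illustration of the idea, not the Rasch Sampler's actual implementation, which adds machinery to guarantee the uniform stationary distribution and to control serial dependency):

```python
import random

def margin_preserving_step(matrix, rng=random):
    """One MCMC step on a 0/1 matrix (list of equal-length lists):
    attempt a checkerboard swap; row and column sums are invariant."""
    n_rows, n_cols = len(matrix), len(matrix[0])
    r1, r2 = rng.sample(range(n_rows), 2)
    c1, c2 = rng.sample(range(n_cols), 2)
    a, b = matrix[r1][c1], matrix[r1][c2]
    c, d = matrix[r2][c1], matrix[r2][c2]
    if (a, b, c, d) in ((1, 0, 0, 1), (0, 1, 1, 0)):
        matrix[r1][c1], matrix[r1][c2] = b, a
        matrix[r2][c1], matrix[r2][c2] = d, c
    return matrix
```

Running many such steps walks through the space of binary matrices sharing the given margins; thinning the chain reduces the serial dependency between retained samples.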
369

Supervised Learning Techniques : A comparison of the Random Forest and the Support Vector Machine

Arnroth, Lukas, Fiddler Dennis, Jonni January 2016 (has links)
This thesis examines the performance of the support vector machine and the random forest models in the context of binary classification. The two techniques are compared and the outstanding one is used to construct a final parsimonious model. The data set consists of 33 observations and 89 biomarkers as features with no known dependent variable. The dependent variable is generated through k-means clustering, with a predefined final solution of two clusters. The training of the algorithms is performed using five-fold cross-validation repeated twenty times. The outcome of the training process reveals that the best performing versions of the models are a linear support vector machine and a random forest with six randomly selected features at each split. The final results of the comparison on the test set of these optimally tuned algorithms show that the random forest outperforms the linear kernel support vector machine. The former classifies all observations in the test set correctly whilst the latter classifies all but one correctly. Hence, a parsimonious random forest model using the top five features is constructed, which, to conclude, performs equally well on the test set compared to the original random forest model using all features.
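The label-generation step described above — k-means with a predefined final solution of two clusters — can be sketched with Lloyd's algorithm on toy 1-D data (the thesis clusters 33 observations over 89 biomarkers; this is only an illustration):

```python
def kmeans_two_clusters(points, iters=20):
    """Lloyd's algorithm with k=2 on 1-D data; returns a 0/1 label per point.
    Centers are initialised at the min and max of the data for simplicity."""
    c0, c1 = min(points), max(points)
    labels = [0] * len(points)
    for _ in range(iters):
        # assignment step: nearest center
        labels = [0 if abs(p - c0) <= abs(p - c1) else 1 for p in points]
        # update step: recompute centers as cluster means
        group0 = [p for p, l in zip(points, labels) if l == 0]
        group1 = [p for p, l in zip(points, labels) if l == 1]
        if group0:
            c0 = sum(group0) / len(group0)
        if group1:
            c1 = sum(group1) / len(group1)
    return labels
```

The resulting 0/1 labels then play the role of the dependent variable for the supervised classifiers, exactly as in the cluster-then-classify setup the abstract describes.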
370

Regularized and robust regression methods for high dimensional data

Hashem, Hussein Abdulahman January 2014 (has links)
Recently, variable selection in high-dimensional data has attracted much research interest. Classical stepwise subset selection methods are widely used in practice, but when the number of predictors is large these methods are difficult to implement. In these cases, modern regularization methods have become a popular choice, as they perform variable selection and parameter estimation simultaneously. However, the estimation procedure becomes more difficult and challenging when the data suffer from outliers or when the assumption of normality is violated, such as in the case of heavy-tailed errors. In these cases, quantile regression is the most appropriate method to use. In this thesis we combine these two classical approaches to produce regularized quantile regression methods. Chapter 2 presents a comparative simulation study of regularized and robust regression methods when the response variable is continuous. In Chapter 3, we develop a quantile regression model with a group lasso penalty for binary response data when the predictors have a grouped structure and when the data suffer from outliers. In Chapter 4, we extend this method to the case of censored response variables. Numerical examples on simulated and real data are used to evaluate the performance of the proposed methods in comparison with other existing methods.
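The objective combined here is the quantile-regression check (pinball) loss plus an L1-type penalty; a small sketch of evaluating that penalized objective (illustrative names, a plain lasso penalty rather than the thesis's group lasso):

```python
def check_loss(residual, tau):
    """Quantile-regression check loss rho_tau(u) = u * (tau - 1{u < 0}).
    For tau = 0.5 this is half the absolute loss (median regression)."""
    return residual * (tau - (1.0 if residual < 0 else 0.0))

def lasso_quantile_objective(beta, X, y, tau, lam):
    """Penalized objective: mean check loss + lambda * ||beta||_1.
    X is a list of feature rows, beta a coefficient list."""
    loss = 0.0
    for xi, yi in zip(X, y):
        pred = sum(b * x for b, x in zip(beta, xi))
        loss += check_loss(yi - pred, tau)
    return loss / len(y) + lam * sum(abs(b) for b in beta)
```

Minimizing this objective over beta gives a sparse estimate of the conditional tau-th quantile; because the check loss grows only linearly in the residual, the fit is robust to outliers and heavy-tailed errors, which is the motivation given in the abstract.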
