About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.

Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
821

The pseudo-rigid-body model for dynamic predictions of macro and micro compliant mechanisms /

Lyon, Scott M. January 2003 (has links) (PDF)
Thesis (Ph. D.)--Brigham Young University. Dept. of Mechanical Engineering, 2003. / Includes bibliographical references (p. 151-156).
822

Design and analysis of end-effector systems for scribing on silicon /

Cannon, Bennion R. January 2003 (has links) (PDF)
Thesis (M.S.)--Brigham Young University. Dept. of Mechanical Engineering, 2003. / Includes bibliographical references (p. 109-110).
823

Sequence classification and melody tracks selection /

Tang, Fung, Michael, January 2001 (has links)
Thesis (M. Phil.)--University of Hong Kong, 2002. / Includes bibliographical references (leaves 107-109).
824

Probabilistic rank aggregation for multiple SVM ranking /

Cheung, Chi-Wai. January 2009 (has links)
Includes bibliographical references (p. 38-40).
825

Fabric wrinkle characterization and classification using modified wavelet coefficients and support-vector-machine classifiers

Sun, Jingjing 03 August 2012 (has links)
Wrinkling caused by wearing and laundry procedures is one of the most important performance properties of a fabric. Visual examination performed by trained experts is a routine wrinkle evaluation method in the textile industry; however, this subjective evaluation is time-consuming. The need for objective, automatic and efficient methods of wrinkle evaluation has been increasing remarkably in recent years. In the present thesis, a wavelet-transform-based image analysis method was developed to measure the 2D fabric surface data captured by an infrared imaging system. After decomposing the fabric image by the Haar wavelet transform algorithm, five parameters were defined based on modified wavelet coefficients to describe wrinkling features, such as orientation, hardness, density and contrast. The wrinkle parameters provide useful information for textile, appliance, and detergent manufacturers who study the wrinkling behavior of fabrics. A Support-Vector-Machine-based classification scheme was developed for automatic wrinkle rating. Both linear and radial-basis-function (RBF) kernel functions were used to achieve higher rating accuracy. The effectiveness of this evaluation method was tested on 300 images of five selected fabric types with different fiber contents, weave structures, colors and laundering cycles. The results show agreement between the proposed wavelet-based automatic assessment and experts’ visual ratings. / text
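A minimal sketch of the kind of Haar-wavelet-plus-SVM pipeline this abstract describes is shown below; the feature statistics, variable names, and parameter choices are illustrative assumptions rather than the thesis implementation (Python, assuming PyWavelets and scikit-learn are available).

```python
# Minimal sketch (not the thesis code): Haar-wavelet features plus an SVM rater.
# Feature statistics, variable names, and parameters below are illustrative assumptions.
import numpy as np
import pywt
from sklearn.svm import SVC

def wrinkle_features(surface):
    """Decompose a 2D fabric surface image with the Haar wavelet and compute
    simple statistics of the detail coefficients, standing in for the five
    modified-coefficient wrinkle parameters described in the abstract."""
    _, (cH, cV, cD) = pywt.dwt2(surface, "haar")
    feats = []
    for band in (cH, cV, cD):
        feats.append(np.mean(np.abs(band)))  # coarse wrinkle "density"/energy
        feats.append(np.std(band))           # coarse wrinkle "contrast"
    return np.array(feats)

# Assumed data: `images` is a list of 2D arrays, `ratings` the experts' grades.
# X = np.stack([wrinkle_features(img) for img in images])
# clf = SVC(kernel="rbf", C=1.0, gamma="scale").fit(X, ratings)
# predicted = clf.predict(wrinkle_features(test_image).reshape(1, -1))
```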
826

Autonomous sensor and action model learning for mobile robots

Stronger, Daniel Adam 06 September 2012 (has links)
Autonomous mobile robots have the potential to be extremely beneficial to society due to their ability to perform tasks that are difficult or dangerous for humans. These robots will necessarily interact with their environment through the two fundamental processes of acting and sensing. Robots learn about the state of the world around them through their sensations, and they influence that state through their actions. However, in order to interact with their environment effectively, these robots must have accurate models of their sensors and actions: knowledge of what their sensations say about the state of the world and how their actions affect that state. A mobile robot’s action and sensor models are typically tuned manually, a brittle and laborious process. The robot’s actions and sensors may change either over time from wear or because of a novel environment’s terrain or lighting. It is therefore valuable for the robot to be able to autonomously learn these models. This dissertation presents a methodology that enables mobile robots to learn their action and sensor models starting without an accurate estimate of either model. This methodology is instantiated in three robotic scenarios. First, an algorithm is presented that enables an autonomous agent to learn its action and sensor models in a class of one-dimensional settings. Experimental tests are performed on a four-legged robot, the Sony Aibo ERS-7, walking forward and backward at different speeds while facing a fixed landmark. Second, a probabilistically motivated model learning algorithm is presented that operates on the same robot walking in two dimensions with arbitrary combinations of forward, sideways, and turning velocities. Finally, an algorithm is presented to learn the action and sensor models of a very different mobile robot, an autonomous car. / text
827

Mining statistical correlations with applications to software analysis

Davis, Jason Victor 12 October 2012 (has links)
Machine learning, data mining, and statistical methods work by representing real-world objects in terms of feature sets that best describe them. This thesis addresses problems related to inferring and analyzing correlations among such features. The contributions of this thesis are two-fold: we develop formulations and algorithms for addressing correlation mining problems, and we also provide novel applications of our methods to statistical software analysis domains. We consider problems related to analyzing correlations via unsupervised approaches, as well as algorithms that infer correlations using fully-supervised or semi-supervised information. In the context of correlation analysis, we propose the problem of correlation matrix clustering, which employs a k-means-style algorithm to group sets of correlations in an unsupervised manner. Fundamental to this algorithm is a measure for comparing correlations called the log-determinant (LogDet) divergence, and a primary contribution of this thesis is that of interpreting and analyzing this measure in the context of information theory and statistics. Additionally, based on the LogDet divergence, we present a metric learning problem called Information-Theoretic Metric Learning which uses semi-supervised or fully-supervised data to infer correlations for parametrization of a Mahalanobis distance metric. We also consider the problem of learning Mahalanobis correlation matrices in high dimensions, where the number of pairwise correlations can grow very large. In validating our correlation mining methods, we consider two in-depth, real-world statistical software analysis problems: software error reporting and unit test prioritization. In the context of Clarify, we investigate two types of correlation mining applications: metric learning for nearest neighbor software support, and decision trees for error classification. We show that our metric learning algorithms can learn program-specific similarity models for more accurate nearest neighbor comparisons. In the context of decision tree learning, we address the problem of learning correlations with associated feature costs, in particular the overhead costs of software instrumentation. As our second application, we present a unit test ordering algorithm which uses clustering and nearest neighbor algorithms, along with a metric learning component, to efficiently search and execute large unit test suites. / text
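For context, the LogDet divergence underlying the metric learning work above is the standard Burg matrix divergence between positive-definite matrices; a minimal NumPy sketch of the textbook definition follows (it is not code from the thesis).

```python
# Textbook definition of the LogDet (Burg matrix) divergence used in
# Information-Theoretic Metric Learning; not code from the thesis.
import numpy as np

def logdet_divergence(A, B):
    """D_ld(A, B) = tr(A B^-1) - log det(A B^-1) - n, for n x n positive-definite A, B."""
    n = A.shape[0]
    M = A @ np.linalg.inv(B)
    _, logdet = np.linalg.slogdet(M)
    return np.trace(M) - logdet - n

# Sanity check: the divergence vanishes when the two matrices coincide.
# I = np.eye(3); assert abs(logdet_divergence(I, I)) < 1e-12
```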
828

Learning with high-dimensional noisy data

Chen, Yudong 25 September 2013 (has links)
Learning an unknown parameter from data is a problem of fundamental importance across many fields of engineering and science. Rapid development in information technology allows a large amount of data to be collected. The data is often highly non-uniform and noisy, sometimes subject to gross errors and even direct manipulations. Data explosion also highlights the importance of the so-called high-dimensional regime, where the number of variables might exceed the number of samples. Extracting useful information from the data requires high-dimensional learning algorithms that are robust to noise. However, standard algorithms for the high-dimensional regime are often brittle to noise, and the suite of techniques developed in Robust Statistics are often inapplicable to large and high-dimensional data. In this thesis, we study the problem of robust statistical learning in high dimensions from noisy data. Our goal is to better understand the behavior and effect of noise in high-dimensional problems, and to develop algorithms that are statistically efficient, computationally tractable, and robust to various types of noise. We forge into this territory by considering three important sub-problems. We first look at the problem of recovering a sparse vector from a few linear measurements, where both the response vector and the covariate matrix are subject to noise. Both stochastic and arbitrary noise are considered. We show that standard approaches are inadequate in these settings. We then develop robust, efficient algorithms that provably recover the support and values of the sparse vector under different noise models and require minimal knowledge of the nature of the noise. Next, we study the problem of recovering a low-rank matrix from partially observed entries, with some of the observations arbitrarily corrupted. We consider the entry-wise corruption setting where no row or column has too many entries corrupted, and provide performance guarantees for a natural convex relaxation approach. Our unified guarantees cover both randomly and deterministically located corruptions, and improve upon existing results. We then turn to the column-wise corruption case where all observations from some columns are arbitrarily contaminated. We propose a new convex optimization approach and show that it simultaneously identifies the corrupted columns and recovers unobserved entries in the uncorrupted columns. Lastly, we consider the graph clustering problem, i.e., arranging the nodes of a graph into clusters such that there are relatively dense connections inside the clusters and sparse connections across different clusters. We propose a semi-random Generalized Stochastic Blockmodel for clustered graphs and develop a new algorithm based on convexified maximum likelihood estimators. We provide theoretical performance guarantees which recover, and sometimes improve on, all existing results for the classical stochastic blockmodel, the planted k-clique model and the planted coloring model. We extend our algorithm to the case where the clusters are allowed to overlap with each other, and provide theoretical characterization of the performance of the algorithm. A further extension is studied when the graph may change over time. We develop new approaches to incorporate the time dynamics and show that they can identify stable overlapping communities in real-world time-evolving graphs. / text
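The entry-wise corruption setting described above is typically attacked with a nuclear-norm-plus-l1 convex relaxation; the generic robust-PCA-style sketch below illustrates one such formulation under assumed variable names and a heuristic regularization weight, which may differ from the thesis's precise approach (Python, assuming NumPy and CVXPY).

```python
# Generic nuclear-norm + l1 convex relaxation for recovering a low-rank matrix
# from partially observed, partly corrupted entries (robust-PCA style).
# Variable names and the regularization weight are illustrative assumptions;
# the thesis's exact formulation and guarantees may differ.
import numpy as np
import cvxpy as cp

def recover_low_rank(M_obs, observed, lam=None):
    """M_obs: matrix holding the observed values (arbitrary elsewhere);
    observed: boolean mask marking which entries were observed."""
    m, n = M_obs.shape
    mask = observed.astype(float)
    if lam is None:
        lam = 1.0 / np.sqrt(max(m, n))  # common heuristic weight
    L = cp.Variable((m, n))  # low-rank component
    S = cp.Variable((m, n))  # sparse corruption component
    constraints = [cp.multiply(mask, L + S) == cp.multiply(mask, M_obs)]
    objective = cp.Minimize(cp.normNuc(L) + lam * cp.norm1(S))
    cp.Problem(objective, constraints).solve()
    return L.value, S.value
```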
829

Crunch the market : a Big Data approach to trading system optimization

Mauldin, Timothy Allan 23 April 2014 (has links)
Due to the size of the data needed, running software to analyze and tune intraday trading strategies can take large amounts of time away from analysts, who would like to be able to evaluate strategies and optimize strategy parameters very quickly, ideally in the blink of an eye. Fortunately, Big Data technologies are evolving rapidly and can be leveraged for these purposes. These technologies include software systems for distributed computing, parallel hardware, and on-demand computing resources in the cloud. This report presents a distributed software system for trading strategy analysis. It also demonstrates the effectiveness of Machine Learning techniques in decreasing parameter optimization workload. The results from tests run on two different commercial cloud service providers show linear scalability when analyzing intraday trading strategies. / text
830

Efficient shared object space support for distributed Java virtual machine

Lam, King-tin., 林擎天. January 2012 (has links)
Given the popularity of Java, extending the standard Java virtual machine (JVM) to become cluster-aware effectively brings the vision of transparent horizontal scaling of applications to fruition. With a set of cluster-wide JVMs orchestrated as a virtually single system, thread-level parallelism in Java is no longer confined to one multiprocessor. An unmodified multithreaded Java application running on such a Distributed JVM (DJVM) can scale out transparently, tapping into the vast computing power of the cluster. While this notion creates an easy-to-use and powerful parallel programming paradigm, research on DJVMs has remained largely at the proof-of-concept stage where successes were proven using trivial scientific computing workloads only. Real-life Java applications with commercial server workloads have not been well-studied on DJVMs. Their natures including complex and sometimes huge object graphs, irregular access patterns and frequent synchronizations are key scalability hurdles. To design a scalable DJVM for real-life applications, we identify three major unsolved issues calling for a top-to-bottom overhaul of traditional systems. First, we need a more time- and space-efficient cache coherence protocol to support fine-grained object sharing over the distributed shared heap. The recent prevalence of concurrent data structures with heavy use of volatile fields has added complications to the matter. Second, previous generations of DJVMs lack true support for memory-intensive applications. While the network-wide aggregated physical memory can be huge, mutual sharing of huge object graphs like Java collections may cause nodes to eventually run out of local heap space because the cached copies of remote objects, linked by active references, can’t be arbitrarily discarded. Third, thread affinity, which determines the overall communication cost, is vital to the DJVM performance. Data access locality can be improved by collocating highly-correlated threads, via dynamic thread migration. Tracking inter-thread correlations trades profiling costs for reduced object misses. Unfortunately, profiling techniques like active correlation tracking used in page-based DSMs would entail prohibitively high overheads and low accuracy when ported to fine-grained object-based DJVMs. This dissertation presents technical contributions towards all these problems. We use a dual-protocol approach to address the first problem. Synchronized (lock-based) and volatile accesses are handled by a home-based lazy release consistency (HLRC) protocol and a sequential consistency (SC) protocol respectively. The two protocols’ metadata are maintained in a conflict-free, memory-efficient manner. With further techniques like hierarchical passing of lock ownerships, the overall communication overheads of fine-grained distributed object sharing are pruned to a minimal level. For the second problem, we develop a novel uncaching mechanism to safely break a huge active object graph. When a JVM instance runs low on free memory, it initiates an uncaching policy, which eagerly assigns nulls to selected reference fields, thus detaching some older or less useful cached objects from the root set for reclamation. Careful orchestration is made between uncaching, local garbage collection and the coherence protocol to avoid possible data races. Lastly, we devise lightweight sampling-based profiling methods to derive inter-thread correlations, and a profile-guided thread migration policy to boost the system performance. 
Extensive experiments have demonstrated the effectiveness of all our solutions. / published_or_final_version / Computer Science / Doctoral / Doctor of Philosophy
