171

Machine Learning Methods for 3D Object Classification and Segmentation

Le, Truc Duc 16 April 2019 (has links)
Object understanding is a fundamental problem in computer vision, and it has been extensively researched in recent years thanks to the availability of powerful GPUs and labelled data, especially in the context of images. However, 3D object understanding is still not on par with its 2D counterpart, and deep learning for 3D has not yet been fully explored. In this dissertation, I work on two approaches, both of which advance the state of the art in 3D classification and segmentation.

The first approach, called MVRNN, is based on the multi-view paradigm. In contrast to MVCNN, which does not generate consistent results across different views, MVRNN treats the multi-view images as a temporal sequence, correlates their features, and generates coherent segmentations across views. MVRNN demonstrated state-of-the-art performance on the Princeton Segmentation Benchmark dataset.

The second approach, called PointGrid, is a hybrid method that combines points with a regular grid structure. 3D points retain fine details but are irregular, which is a challenge for deep learning methods. A volumetric grid is simple and regular, but does not scale well with data resolution. PointGrid is simple and allows fine details to be consumed by ordinary convolutions on a coarser-resolution grid. PointGrid achieved state-of-the-art performance on the ModelNet40 and ShapeNet datasets for 3D classification and object part segmentation.
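The point-in-grid embedding behind PointGrid can be pictured with a short, self-contained sketch. This is only an illustration of the general idea, not the author's implementation; the grid size, per-cell point count k, and function name are assumptions.

    import numpy as np

    def point_grid_tensor(points, grid_size=16, k=4):
        """Illustrative PointGrid-style embedding: scatter a point cloud into a
        coarse voxel grid and keep up to k point coordinates per cell, giving a
        dense tensor that ordinary 3D convolutions can consume."""
        mins = points.min(axis=0)
        span = float((points.max(axis=0) - mins).max()) or 1.0
        pts = (points - mins) / span                    # normalise into [0, 1]^3
        cells = np.minimum((pts * grid_size).astype(int), grid_size - 1)

        grid = np.zeros((grid_size, grid_size, grid_size, k, 3), dtype=np.float32)
        counts = np.zeros((grid_size, grid_size, grid_size), dtype=int)
        for p, (i, j, l) in zip(pts, cells):
            if counts[i, j, l] < k:                     # keep at most k points per cell
                grid[i, j, l, counts[i, j, l]] = p      # (the paper samples; this truncates)
                counts[i, j, l] += 1
        # Flatten per-cell points into channels: (N, N, N, k*3).
        return grid.reshape(grid_size, grid_size, grid_size, k * 3)

    # Example: a random cloud of 2048 points becomes a 16 x 16 x 16 x 12 tensor.
    print(point_grid_tensor(np.random.rand(2048, 3)).shape)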
172

Predicting National Basketball Association Game Outcomes Using Ensemble Learning Techniques

Valenzuela, Russell 25 April 2019 (has links)
There have been a number of studies that try to predict sporting event outcomes. Most previous research has involved results in football and college basketball, and recent years have seen similar approaches carried out in professional basketball. This thesis attempts to build upon existing statistical techniques and apply them to the National Basketball Association, using a synthesis of algorithms as motivation. A number of ensemble learning methods are utilized and compared in the hope of improving on the accuracy of single models. The individual models used in this thesis are derived from Logistic Regression, Naïve Bayes, Random Forests, Support Vector Machines, and Artificial Neural Networks, while the aggregation techniques include Bagging, Boosting, and Stacking. Data from previous seasons and games, covering both players and teams, are used to train the models in R.
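As a rough illustration of the stacking idea, the sketch below combines the same five model families with a logistic-regression meta-learner. The thesis itself trains its models in R; this Python/scikit-learn version, with invented placeholder features, only shows the mechanics.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier, StackingClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score
    from sklearn.naive_bayes import GaussianNB
    from sklearn.neural_network import MLPClassifier
    from sklearn.svm import SVC

    # Placeholder data: rows are games, columns are team/player statistics,
    # y = 1 if the home team won. Real features would come from past seasons.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 12))
    y = (rng.random(500) > 0.5).astype(int)

    base_learners = [
        ("logit", LogisticRegression(max_iter=1000)),
        ("nb", GaussianNB()),
        ("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
        ("svm", SVC(probability=True)),
        ("ann", MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000)),
    ]

    # Stacking: a logistic-regression meta-learner combines the base predictions.
    stack = StackingClassifier(estimators=base_learners,
                               final_estimator=LogisticRegression(max_iter=1000))
    print(cross_val_score(stack, X, y, cv=5).mean())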
173

Digital camera identification using sensor pattern noise for forensics applications

Lawgaly, Ashref January 2017 (has links)
Nowadays, millions of pictures are shared through the internet without any authentication system being applied. This can cause serious problems, particularly when a digital image is an important component of a decision-making process, for example in child pornography and movie piracy cases. Motivated by this, the present research investigates the estimation of Photo Response Non-Uniformity (PRNU) and develops new estimation approaches to improve the performance of digital source camera identification. PRNU noise is a sensor pattern noise that characterizes the imaging device. Nonetheless, the PRNU estimation procedure has to contend with the presence of image-dependent information as well as other non-unique noise components. This thesis primarily focuses on efficiently estimating the physical PRNU components at different stages. First, an image sharpening technique is proposed as a pre-processing step for source camera identification; the sharpening method aims to amplify the PRNU components for better estimation. In the estimation stage, a new weighted averaging (WA) technique is presented: most existing techniques estimate PRNU by constant averaging of residue signals extracted from a set of images, yet treating all residue signals equally is optimal only if they carry undesirable noise of the same variance. Moreover, an improved version of the locally adaptive discrete cosine transform (LADCT) filter is proposed in the filtering stage to reduce the effect of scene details on noise residues. Finally, the post-estimation stage combines the PRNU estimated from each colour plane, which reduces the effect of colour interpolation and increases the amount of physical PRNU content. The aforementioned techniques have been assessed on two image datasets acquired by several camera devices, and experimental results have shown a significant improvement over related state-of-the-art systems. Nevertheless, the experiments in this thesis do not include images taken at different acquisition resolutions, so the effect of this setting on PRNU performance is not evaluated. Moreover, images captured by scanners and cell phones could be included for a more comprehensive study. Another limitation is that the effect of JPEG compression and gamma correction on the improvement has not been investigated. Additionally, the proposed methods have not been considered in cases of geometric processing, for instance cropping or resizing.
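The weighted-averaging step can be sketched as follows. This is a hedged illustration only: the wavelet denoiser, the inverse-variance weights, and the normalisation are assumptions, not the exact scheme proposed in the thesis.

    import numpy as np
    from skimage.restoration import denoise_wavelet

    def prnu_weighted_average(images):
        """images: same-size grayscale arrays in [0, 1] from one camera.
        Returns a rough PRNU fingerprint via weighted averaging of residues."""
        residues, weights = [], []
        for img in images:
            denoised = denoise_wavelet(img, rescale_sigma=True)
            w = img - denoised                 # noise residue = image - denoised image
            var = w.var() + 1e-12              # residues with more scene leakage have
            residues.append(w)                 # larger variance and get smaller weight
            weights.append(1.0 / var)
        weights = np.asarray(weights) / np.sum(weights)
        k = sum(wgt * res for wgt, res in zip(weights, residues))
        # The PRNU factor is multiplicative; dividing by the mean image is a
        # simple proxy for removing the image-content term.
        return k / (np.mean(images, axis=0) + 1e-6)

    # Matching a query image against this fingerprint is then typically done by
    # correlating the query's residue with (query image * fingerprint).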
174

Learning to Play Cooperative Games via Reinforcement Learning

Wei, Ermo 02 March 2019 (has links)
Being able to accomplish tasks with multiple learners through learning has long been a goal of the multiagent systems and machine learning communities. One of the main approaches people have taken is reinforcement learning, but due to certain conditions and restrictions, applying reinforcement learning in a multiagent setting has not achieved the same level of success as its single-agent counterpart.

This thesis aims to make coordination better for agents in cooperative games by improving reinforcement learning algorithms in several ways. I begin by examining certain pathologies that can lead to the failure of reinforcement learning in cooperative games, in particular the pathology of relative overgeneralization. In relative overgeneralization, agents do not learn to collaborate optimally because during the learning process each agent instead converges to behaviors which are robust in conjunction with the other agent's exploratory (and thus random), rather than optimal, choices. One solution to this is so-called lenient learning, where agents are forgiving of the poor choices of their teammates early in the learning cycle. In the first part of the thesis, I develop a lenient learning method to deal with relative overgeneralization in independent-learner settings with small stochastic games and discrete actions.

I then examine certain issues in a more complex multiagent domain involving parameterized-action Markov decision processes, motivated by the RoboCup 2D simulation league. I propose two methods, one batch method and one actor-critic method, based on state-of-the-art reinforcement learning algorithms, and show experimentally that the proposed algorithms can train the agents in a significantly more sample-efficient way than more common methods.

I then broaden the parameterized-action scenario to consider both repeated and stochastic games with continuous actions. I show how relative overgeneralization prevents the multiagent actor-critic model from learning optimal behaviors and demonstrate how to use Soft Q-Learning to solve this problem in repeated games.

Finally, I extend imitation learning to the multiagent setting to solve related issues in stochastic games, and prove that, given demonstrations from an expert, multiagent imitation learning is exactly the multiagent actor-critic model in the Maximum Entropy Reinforcement Learning framework. I further show that when the demonstration samples meet certain conditions, the relative overgeneralization problem can be avoided during the learning process.
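A minimal sketch of lenient independent learning in a single-state (repeated) cooperative game is given below. The leniency schedule, class name, and parameters are illustrative assumptions; the thesis's method targets small stochastic games and may differ in detail.

    import random
    from collections import defaultdict

    class LenientQLearner:
        def __init__(self, actions, alpha=0.1, leniency_decay=0.995):
            self.q = defaultdict(float)            # action-value estimates
            self.temp = defaultdict(lambda: 1.0)   # per-action leniency temperature
            self.actions, self.alpha, self.decay = actions, alpha, leniency_decay

        def act(self, epsilon=0.1):
            if random.random() < epsilon:
                return random.choice(self.actions)
            return max(self.actions, key=lambda a: self.q[a])

        def update(self, action, reward):
            delta = reward - self.q[action]
            # Lenient rule: accept increases always; accept decreases only with
            # probability 1 - temperature, so a teammate's early random choices
            # do not drag the estimate down (relative overgeneralization).
            if delta >= 0 or random.random() > self.temp[action]:
                self.q[action] += self.alpha * delta
            self.temp[action] *= self.decay

    # Two independent learners would each keep their own LenientQLearner and
    # call act()/update() every round of the cooperative game.
    agent = LenientQLearner(actions=[0, 1, 2])
    a = agent.act()
    agent.update(a, reward=1.0 if a == 0 else 0.0)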
175

Propagation redundancy in finite domain constraint satisfaction

January 2005 (has links)
A widely adopted approach to solving constraint satisfaction problems combines backtracking tree search with various degrees of constraint propagation for pruning the search space. One common technique to improve execution efficiency is to add redundant constraints: constraints logically implied by others in the problem model, which may offer extra information to enhance constraint propagation. However, some redundant constraints are propagation redundant and hence do not contribute additional propagation information to the constraint solver. In this thesis, we propose propagation rules as a tool to compare the propagation strength of constraints, and establish results relating logical and propagation redundancy.

Redundant constraints arise naturally in the process of redundant modeling, where two models of the same problem are connected and combined through channeling constraints. We characterize channeling constraints in terms of restrictive and unrestrictive channel functions and give general theorems for proving propagation redundancy of constraints in the combined model. We illustrate, on problems from CSPLib, how detecting and removing propagation redundant constraints can often significantly speed up constraint solving.

Choi Chiu Wo. Adviser: Jimmy Lee. Thesis (Ph.D.)--Chinese University of Hong Kong, 2005. Includes bibliographical references (p. 106-117). Abstract in English and Chinese.
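To make "redundant modeling" and "channeling constraints" concrete, here is a toy example that is not taken from the thesis: a permutation modeled both as x (item to position) and as y (position to item), linked by the channeling constraint x[i]=j <=> y[j]=i. The brute-force check only illustrates logical redundancy of an explicit all-different on x; whether such a constraint is also propagation redundant depends on the consistency the solver enforces.

    from itertools import product

    def channeled(x, y):
        """True iff assignments x and y satisfy all channeling constraints."""
        n = len(x)
        return all((x[i] == j) == (y[j] == i) for i in range(n) for j in range(n))

    def all_different(v):
        return len(set(v)) == len(v)

    n = 3
    for x in product(range(n), repeat=n):
        for y in product(range(n), repeat=n):
            if channeled(x, y):
                # Over complete assignments, the channeling alone already forces
                # both x and y to be permutations, so an explicit all-different
                # on x is logically redundant in the combined model.
                assert all_different(x) and all_different(y)
    print("channeling alone enforces permutations for n =", n)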
176

Tractable projection-safe soft global constraints in weighted constraint satisfaction.

January 2011 (has links)
Wu, Yi. Thesis (M.Phil.)--Chinese University of Hong Kong, 2011. Includes bibliographical references (p. 74-80). Abstracts in English and Chinese.

Table of contents:
Chapter 1 Introduction (p.1)
  1.1 Constraint Satisfaction Problems (p.1)
  1.2 Weighted Constraint Satisfaction Problems (p.3)
  1.3 Motivation and Goal (p.4)
  1.4 Outline of the Thesis (p.5)
Chapter 2 Background (p.7)
  2.1 Constraint Satisfaction Problems (p.7)
    2.1.1 Backtracking Tree Search (p.8)
    2.1.2 Local Consistencies in CSP (p.11)
  2.2 Weighted Constraint Satisfaction Problems (p.18)
    2.2.1 Branch and Bound Search (p.20)
    2.2.2 Local Consistencies in WCSP (p.21)
  2.3 Global Constraints (p.31)
Chapter 3 Tractable Projection-Safety (p.36)
  3.1 Tractable Projection-Safety: Definition and Analysis (p.37)
  3.2 Polynomially Decomposable Soft Constraints (p.42)
Chapter 4 Examples of Polynomially Decomposable Soft Global Constraints (p.48)
  4.1 Soft Among Constraint (p.49)
  4.2 Soft Regular Constraint (p.51)
  4.3 Soft Grammar Constraint (p.54)
  4.4 Max_Weight/Min_Weight Constraint (p.57)
Chapter 5 Experiments (p.61)
  5.1 The Car Sequencing Problem (p.61)
  5.2 The Nonogram Problem (p.62)
  5.3 Well-Formed Parenthesis (p.64)
  5.4 Minimum Energy Broadcasting Problem (p.64)
Chapter 6 Related Work (p.67)
  6.1 WCSP Consistencies (p.67)
  6.2 Global Constraints (p.68)
Chapter 7 Conclusion (p.71)
  7.1 Contributions (p.71)
  7.2 Future Work (p.72)
Bibliography (p.74)
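As a concrete example of the kind of soft global constraint listed in Chapter 4, the sketch below gives a possible cost function for a soft Among constraint. The variable-based violation measure and the example are assumptions for illustration; in the thesis such constraints live inside a WCSP solver rather than being evaluated standalone.

    def soft_among_cost(assignment, values, lb, ub):
        """Cost = how far the count of variables taking a value in `values`
        falls outside [lb, ub]; 0 means the hard Among constraint holds."""
        count = sum(1 for v in assignment if v in values)
        if count < lb:
            return lb - count
        if count > ub:
            return count - ub
        return 0

    # Example: at most 2 of the 4 shifts may be night shifts ("N").
    print(soft_among_cost(["N", "D", "N", "N"], {"N"}, 0, 2))  # -> 1 (one too many)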
177

ANN wave prediction model for winter storms and hurricanes

Kim, Jun-Young. 01 January 2003 (has links)
Currently available wind-wave prediction models require a prohibitive amount of computing time for simulating non-linear wave-wave interactions. Moreover, some parts of the wind-wave generation process are not yet fully understood, so accurate predictions are not always guaranteed. In contrast, Artificial Neural Network (ANN) techniques are designed to recognize the patterns between input and output, and they can save considerable computing time, making real-time wind-wave forecasts available to navy and commercial ships. This study therefore uses ANN techniques to predict waves for winter storms and hurricanes with much less computing time at five National Oceanic and Atmospheric Administration (NOAA) wave stations along the East Coast of the U.S. from Florida to Maine (stations 44007, 44013, 44025, 44009, and 41009). In order to identify the prediction error sources of an ANN model, fully known wind-wave events simulated with the SMB model were used first. The ANN predicted even untrained wind-wave events accurately, which implied that it could be used for winter-storm and hurricane wave predictions. For the prediction of winter-storm waves, the 1999 and 2001 winter-storm events (403 data points) and the 1998 winter-storm events (78 points) were prepared as training and validation data sets, respectively. Because winter storms are relatively evenly distributed over a large area and move slowly, wind information (u and v wind components) over a large domain was used as the ANN input. With a 24-hour time delay to represent the time required for waves to become fully developed seas, the ANN predicted wave heights accurately (r = 0.88), but the prediction accuracy for zero-crossing wave periods was much lower (r = 0.61). For the prediction of hurricane waves, 15 hurricanes from 1995 to 2001 were used for training and Hurricane Bertha in 1998 for validation. Because hurricanes affect a relatively small domain, move quickly, and change dramatically with time, the inputs were the location of the hurricane center, the maximum wind speed, the central pressure, and the longitudinal and latitudinal distances between the wave stations and the hurricane center. The ANN predicted wave height accurately when a 24-hour time delay was used (r = 0.82), but the prediction accuracy for peak wave periods was much lower (r = 0.50), because the physical processes governing wave periods are more complicated than those governing wave heights. This study shows the potential of ANN techniques as winter-storm and hurricane wave prediction models. If more winter-storm and hurricane data become available and hurricane tracks can be predicted, real-time wind-waves can be forecast more accurately with less computing time.
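A small sketch of the kind of network involved is shown below, mapping lagged wind-component inputs to wave height and reporting a correlation coefficient as in the study. The synthetic data, feature layout, and network size are assumptions for illustration only.

    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(1)
    # Placeholder data: u and v wind components over a coarse grid of points,
    # sampled 24 hours before the target wave observation at a buoy.
    n_samples, n_grid = 400, 20
    X = rng.normal(size=(n_samples, 2 * n_grid))          # [u_1..u_20, v_1..v_20]
    y = 0.5 * np.abs(X[:, :n_grid]).mean(axis=1) + rng.normal(0, 0.05, n_samples)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
    model.fit(X_tr, y_tr)
    r = np.corrcoef(model.predict(X_te), y_te)[0, 1]      # correlation, as reported above
    print(f"r = {r:.2f}")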
178

Object-oriented requirements analysis and design of intelligent computer-integrated manufacturing systems

Unknown Date (has links)
This dissertation addresses the problem of developing intelligent computer-integrated manufacturing (ICIM) systems. The research objectives are to provide an object-oriented development methodology for ICIM systems, supported with tools and techniques to elicit and specify domain knowledge and information. Currently, no suitable methodologies or modeling techniques exist for realizing ICIM systems; existing methodologies represent portions of ICIM but lack the richness necessary to conceptualize and implement ICIM systems.

Research contributions include the composition of a detailed object-oriented methodology, with phases and phase dependencies, for requirements analysis and design of ICIM systems, and the development of a model, ROADMAP, used for knowledge elicitation and specification of the manufacturing domain. Additional contributions include supporting techniques for control-knowledge elicitation, techniques for object specification, and the expansion of standard evaluation techniques for the design of object-oriented systems.

Major Professor: Abraham Kandel. Thesis (Ph.D.)--The Florida State University, 1991.
179

Improved learning strategies for small vocabulary automatic speech recognition

Cardin, Régis January 1993 (has links)
No description available.
180

A probabilistic min-max tree

Kamoun, Olivier January 1992 (has links)
No description available.
