51.
Aligning Sheboygan Area School District's metals/manufacturing machine tool curriculum to meet local needs. Juckem, John R. January 2005 (has links) (PDF)
Thesis, Plan B (M.S.)--University of Wisconsin--Stout, 2005. / Includes bibliographical references.
52.
Creating diverse ensemble classifiers to reduce supervision. Melville, Prem Noel. January 1900 (has links) (PDF)
Thesis (Ph. D.)--University of Texas at Austin, 2005. / Vita. Includes bibliographical references.
53.
Regularized adaptation : theory, algorithms, and applications / Li, Xiao, January 2007 (has links)
Thesis (Ph. D.)--University of Washington, 2007. / Vita. Includes bibliographical references (p. 132-146).
54.
Task-oriented learning of structured probability distributions. Bouchacourt, Diane. January 2017 (has links)
Machine learning models automatically learn from historical data to predict unseen events. Such events are often represented as complex multi-dimensional structures, and in many cases there is high uncertainty in the prediction process. Research has developed probabilistic models to capture distributions of complex objects, but their learning objective is often agnostic of the evaluation loss. In this thesis, we address this deficiency by designing probabilistic methods for structured object prediction that take into account the task at hand. First, we consider the case where the task at hand is explicitly known, but there is ambiguity in the prediction due to an unobserved (latent) variable. We develop a framework for latent structured output prediction that unifies existing empirical risk minimisation methods. We empirically demonstrate that for large and ambiguous latent spaces, performing prediction by minimising the uncertainty in the latent variable provides more accurate results. Empirical risk minimisation methods predict only a pointwise estimate of the output; however, there can be uncertainty in the output value itself. To tackle this deficiency, we introduce a novel type of model to perform probabilistic structured output prediction. Our training objective minimises a dissimilarity coefficient between the data distribution and the model's distribution. This coefficient is defined according to a loss of our choice, so that the objective can be tailored to the task loss. We empirically demonstrate the ability of our model to capture distributions over complex objects. Finally, we tackle a setting where the task loss is only implicitly expressed. Specifically, we consider the case of grouped observations. We propose a new model for learning a representation of the data that decomposes according to the semantics behind this grouping, while allowing efficient test-time inference. We experimentally demonstrate that our model learns a disentangled and controllable representation, leverages grouping information when available, and generalises to unseen observations.
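A small sketch can make the contrast between a pointwise estimate and a loss-tailored prediction concrete. The example below is a generic minimum-expected-loss decision over a toy distribution of binary structured outputs with a Hamming task loss; the distribution, candidates and loss are assumptions for illustration, and this is not the dissimilarity-coefficient objective developed in the thesis.

```python
# A minimal, assumed illustration: contrast a pointwise (MAP) prediction with a
# prediction chosen to minimise the expected task loss (here, Hamming loss)
# under a toy model distribution over 3-bit structured outputs.
from itertools import product

def hamming(y, y_prime):
    """Task loss: number of positions where two outputs disagree."""
    return sum(a != b for a, b in zip(y, y_prime))

# Toy model distribution over structured outputs (probabilities sum to 1).
model_dist = {
    (0, 0, 0): 0.35,
    (1, 1, 0): 0.30,
    (1, 0, 1): 0.20,
    (0, 1, 1): 0.15,
}

candidates = list(product([0, 1], repeat=3))

# Pointwise prediction: the single most probable output.
map_prediction = max(model_dist, key=model_dist.get)

# Loss-aware prediction: the candidate with the smallest expected task loss.
def expected_loss(y):
    return sum(p * hamming(y, y_prime) for y_prime, p in model_dist.items())

loss_aware_prediction = min(candidates, key=expected_loss)

print("MAP prediction:       ", map_prediction)
print("Loss-aware prediction:", loss_aware_prediction)
```

Swapping the Hamming loss for any other task loss changes only `expected_loss`, which is the sense in which a prediction (or, in the thesis, a training objective) can be tailored to the loss of interest.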
55.
Learning in the real world environment: a classification model based on sensitivity to within-dimension and between-category variation of feature frequencies. Lam, Newman Ming Ki. 22 June 2018
Research on machine learning has taken numerous different directions. The present study focussed on the microstructural characteristics of learning systems. It was postulated that learning systems consist of a macrostructure which controls the flow of information, and a microstructure which manipulates information for decision making. A review of the literature suggested that the basic function of the microstructure of learning systems was to make a choice among a set of alternatives. This decision function was then equated with the task of making classification decisions. On the basis of the requirements for practical learning systems, the feature frequency approach was chosen for model development. An analysis of the feature frequency approach indicated that an effective model must be sensitive to both within-dimension and between-category variations in frequencies. A model was then developed to provide for such sensitivities. The model was based on Bayes' Theorem with an assumption of uniform prior probability of occurrence for the categories. This model was tested using data collected for neuropsychological diagnosis of children. Results of the tests showed that the model was capable of learning and provided a satisfactory level of performance. The performance of the model was compared with that of other models designed for the same purpose. The other models included NEXSYS, a rule-based system specially designed for this type of diagnosis; discriminant analysis, a statistical technique widely used for pattern recognition; and neural networks, which attempt to simulate the neural activities of the brain. Results of the tests showed that the model's performance was comparable to that of the other models. Further analysis indicated that the model has certain advantages in that it has a simple structure, is capable of explaining its decisions, and is more efficient than the other models. / Graduate
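As a rough, hypothetical illustration of the approach sketched in this abstract, the code below implements a feature-frequency Bayes classifier with a uniform prior over categories on toy binary feature vectors; the data, binary encoding and Laplace smoothing are assumptions for the example and do not reproduce the thesis's model or its neuropsychological data.

```python
# A minimal sketch (assumed toy data, binary features, Laplace smoothing) of a
# feature-frequency Bayes classifier with a uniform prior over categories, so
# classification is driven by per-dimension feature frequencies alone.
import math
from collections import defaultdict

def train(examples):
    """examples: list of (feature_tuple, category). Returns per-category,
    per-dimension counts of feature values, plus per-category totals."""
    counts = defaultdict(lambda: defaultdict(lambda: defaultdict(int)))
    totals = defaultdict(int)
    for features, category in examples:
        totals[category] += 1
        for dim, value in enumerate(features):
            counts[category][dim][value] += 1
    return counts, totals

def classify(features, counts, totals):
    """Pick the category with the highest likelihood; with a uniform prior the
    prior term is constant, so only the feature frequencies matter."""
    best, best_score = None, float("-inf")
    for category in totals:
        score = 0.0
        for dim, value in enumerate(features):
            seen = counts[category][dim][value]
            # Laplace smoothing over the two possible binary values.
            score += math.log((seen + 1) / (totals[category] + 2))
        if score > best_score:
            best, best_score = category, score
    return best

data = [((1, 0, 1), "A"), ((1, 1, 1), "A"), ((0, 0, 0), "B"), ((0, 1, 0), "B")]
counts, totals = train(data)
print(classify((1, 0, 1), counts, totals))  # "A": the frequencies favour category A
```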
56.
Computer numerical controlled drilling machine interfaced to a computer aided design package. Mostert, Pierre Frans. January 1996 (has links)
Thesis (MTech (Electrical Engineering))--Peninsula Technikon, Cape Town, 1996 / In answer to the needs of the growing number of smaller manufacturers of Printed Circuit Boards, the "Computer Numerical Controlled Drill interfaced to a Computer Aided Design software package" was conceptualised. This development would considerably improve both productivity and efficiency, and consequently would improve cost effectiveness.
The smaller manufacturer is unable to afford the more sophisticated Computer Numerical Controlled Drill equipment and the software packages which drive them. It was necessary for the Computer Numerical Controlled machine to possess all the capabilities and versatility of the more sophisticated and expensive equipment, and yet still be affordable to the smaller manufacturer. Furthermore, since such equipment would need to be operated in fairly hostile environments, it would need to be robust and reliable.
This interdisciplinary development has required the knowledge and skill from many different engineering fields, and made them readily available to industry which otherwise would not have had access to them.
The South African economy at present has a growing need for smaller and medium business enterprises, and the needs of these must be appropriately served. It is envisaged that this development will make a significant contribution to greater economic activity.
57.
Translation hypotheses re-ranking for statistical machine translation. Liu, Yan. January 2017 (has links)
University of Macau / Faculty of Science and Technology / Department of Computer and Information Science
58.
Representing spatial experience and solving spatial problems in a simulated robot environment. Rowat, Peter Forbes. January 1979 (has links)
This thesis is concerned with spatial aspects of perception and action in a simple robot. To this end, the problem of designing a robot-controller for a robot in a simulated robot-environment system is considered. The environment is a two-dimensional tabletop with movable polygonal shapes on it. The robot has an eye which 'sees' an area of the tabletop centred on itself, with a resolution which decreases from the centre to the periphery. Algorithms are presented for simulating the motion and collision of two-dimensional shapes in this environment. These algorithms use representations of shape both as a sequence of boundary points and as a region in a digital image. A method is outlined for constructing and updating the world model of the robot as new visual input is received from the eye. It is proposed that, in the world model, the spatial problems of path-finding and object-moving be based on algorithms that find the skeleton of the shape of empty space and of the shape of the moved object. A new iterative algorithm for finding the skeleton, with the property that the skeleton of a connected shape is connected, is presented. This is applied to path-finding and simple object-moving problems. Finally, directions for future work are outlined. / Science, Faculty of / Computer Science, Department of / Graduate
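As context for the skeleton-based approach to path-finding described above, here is a hedged sketch that computes a medial-axis skeleton of a toy free-space grid with scikit-image; the grid is an assumed occupancy map, and the library's distance-transform thinning stands in for, but is not, the connectivity-preserving iterative algorithm developed in the thesis.

```python
# A rough, assumed illustration of planning on the skeleton of free space:
# compute the medial axis of a toy occupancy grid (True = free, False = obstacle)
# and treat skeleton cells as maximal-clearance way-point candidates.
import numpy as np
from skimage.morphology import medial_axis

free_space = np.ones((20, 30), dtype=bool)
free_space[8:12, 5:25] = False          # a rectangular obstacle on the tabletop

skeleton, distance = medial_axis(free_space, return_distance=True)

# Skeleton cells lie midway between obstacles and borders, so a path routed
# along them keeps maximal clearance from everything in the environment.
waypoints = np.argwhere(skeleton)
print(f"{len(waypoints)} skeleton cells, max clearance {distance[skeleton].max():.1f} cells")
```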
59.
Computation by one-way multihead marker automata. Gorlick, Michael Martin. January 1978 (has links)
A new family of automata, one-way multihead marker automata, is introduced. Intuitively, these devices consist of one or more read heads travelling in a single direction on a bounded tape and a finite fixed pool of markers which may be deposited on the tape, sensed, and later removed. The major results obtained are:
(i) A one-way n-head k-marker automaton with distinguished markers (each marker is recognizably distinct) may be simulated by a one-way n-head (k+1)-marker automaton with uniform markers (one marker cannot be told from another);
(ii) Minor restrictions in pebble use and head movement do not significantly affect the recognition power of these devices; consequently it is possible to obtain normal forms for devices with either distinguished or uniform markers;
(iii) If p(x) is a polynomial with positive integer coefficients of degree k > 0, then the language {0^{p(n)} | n ∈ {0, 1, 2, ...}} is recognized by a deterministic one-way k-head (k-1)-marker automaton;
(iv) There exists a class of one-letter languages, characterized by finite linear difference equations, which are recognized by a deterministic one-way n-head 2-marker automaton. Members of this class include the Fibonacci numbers and all languages {0^{k^n} | n ∈ {0, 1, 2, ...}}, k a fixed natural number. This class is of particular interest since it contains languages not generable by any polynomial, proving that a complete characterization of the one-letter languages recognized by one-way multihead marker automata cannot rest upon the polynomial languages alone. / Science, Faculty of / Computer Science, Department of / Graduate
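To make the one-letter languages in result (iii) concrete, the sketch below is a direct membership test for {0^{p(n)} | n ∈ {0, 1, 2, ...}} with an example polynomial; it is an assumed illustration of the language itself and does not simulate the multihead marker automaton that recognizes it.

```python
# Membership test for {0^p(n) | n in {0, 1, 2, ...}} with an example polynomial
# p(n) = 2n^2 + 3n + 1 (positive integer coefficients, so p is increasing for n >= 0).
def p(n):
    return 2 * n * n + 3 * n + 1

def in_language(word):
    """True iff word consists only of 0s and its length equals p(n) for some n."""
    if set(word) - {"0"}:
        return False
    length, n = len(word), 0
    while p(n) < length:
        n += 1
    return p(n) == length

print(in_language("0" * p(3)))   # True: length p(3) = 28
print(in_language("0" * 29))     # False: 29 is not a value of p
```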
60.
Machine learning approaches to manufacturing and materials: Applications to semi-supervised, unbalanced and heterogeneous data problems. Karkare, Rasika S. 30 July 2019
The objective of this thesis is to use machine learning and deep learning techniques for the quality assurance of metal casting processes. Metal casting is a process in which liquid metal is poured into a mold of a desired shape and allowed to solidify; the process is completed when the final solidified component, also known as a casting, is ejected from the mold. There may be undesired irregularities in the metal casting process known as casting defects. Among these, porosity is considered a major defect, and it is difficult to detect until the end of the manufacturing cycle. A porosity defect occurs when small voids, holes or pockets form within the metal. It is important to control and keep porosity below certain permissible thresholds, depending on the product that is being manufactured. If the foundry process can be modeled using machine learning approaches to predict the state of the casting prior to completion of the casting process, it would save the foundry the inspection and testing of the casting, which requires specific attention from staff and expensive testing machinery. Moreover, if the casting fails the quality test, it is rendered useless. This is one of the major issues for foundries today. The main aim of this project is to predict the quality of metal cast components: we determine whether, under given conditions and parameters, a cast component would pass or fail the quality test. Although this thesis focuses on porosity defects, machine learning and deep learning techniques can be used to model other kinds of defects, such as shrinkage defects, metal pouring defects or other metallurgical defects. Another important objective is to identify the parameters of the casting process that are most responsible for porosity control and, ultimately, for the quality of the cast component. The challenges faced during the analysis of a small, unbalanced, heterogeneous and semi-supervised dataset such as this one are also covered. We compare the results obtained using different machine learning techniques in terms of F1 score, precision and recall, among other metrics, on unseen test data after cross-validation. Finally, conclusions and the scope for future work are discussed.
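As a generic illustration of the kind of evaluation described in this abstract (not the thesis's foundry data or models), the sketch below trains a class-weighted classifier on an assumed imbalanced synthetic dataset and reports F1, precision and recall under cross-validation with scikit-learn.

```python
# A hedged sketch on synthetic data: handle class imbalance with class weighting
# and report F1 / precision / recall via 5-fold cross-validation. The dataset,
# model and imbalance ratio are assumptions for illustration only.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_validate

# Imbalanced toy data: roughly 90% "pass" and 10% "fail" castings.
X, y = make_classification(n_samples=500, n_features=12, n_informative=6,
                           weights=[0.9, 0.1], random_state=0)

clf = RandomForestClassifier(n_estimators=200, class_weight="balanced",
                             random_state=0)

scores = cross_validate(clf, X, y, cv=5, scoring=("f1", "precision", "recall"))

for metric in ("test_f1", "test_precision", "test_recall"):
    print(metric, round(scores[metric].mean(), 3))
```

Here precision and recall are computed for the minority ("fail") class, the class a foundry most needs to catch; the semi-supervised aspect mentioned in the abstract is not modelled in this sketch.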