21. Consideration of certain aspects of tool wear during high speed machining / Dumas, Walter Arthur, 1925- January 1961 (has links)
No description available.
22. A comparative analysis of visual layout techniques as applied to medium size job machine shops / Denenberg, Earle Israel 08 1900 (has links)
No description available.
23. Real-time multipushdown and multicounter automata networks and hierarchies / Deimel, Lionel Earl January 1975 (has links)
No description available.
24. Universal multihead automata / Martin, Daniel Paul 08 1900 (has links)
No description available.
25. The design of a hydro-pneumatic universal spar testing machine / Lewis, Barclay Marion 05 1900 (has links)
No description available.
26. Reasoning and learning for intelligent agents / Sioutis, Christos. Unknown Date (has links)
Intelligent Agents that operate in dynamic, real-time domains are required to embody complex but controlled behaviours, some of which may not be easily implementable. This thesis investigates the difficulties involved in implementing Intelligent Agents for such environments and makes contributions in the fields of Agent Reasoning, Agent Learning and Agent-Oriented Design in order to overcome some of these difficulties. / The thesis explores the need for incorporating learning into agents. This is done through a comprehensive review of complex application domains where current agent development techniques are insufficient to provide a system of acceptable standard. The theoretical foundations of agent reasoning and learning are reviewed, and a critique of reasoning techniques illustrates how humans make decisions. Furthermore, a number of learning and adaptation methods are introduced. The concepts behind Intelligent Agents, and the reasons why researchers have recently turned to this technology for implementing complex systems, are then reviewed. Overviews of the different agent-oriented development paradigms are given, including the relevant development platforms available for each. / Previous research on modelling how humans make decisions is investigated; in particular, three models are described in detail. A new cognitive, hybrid reasoning model is presented that fuses the three models together so that the merits of one offset the demerits of another. The additional elements available in the new model make it possible to define how learning can be integrated into the reasoning process. In addition, an abstract framework that implements the reasoning and learning model is defined. This framework hides the complexity of learning and allows agents to be designed around the new reasoning model. / Finally, the thesis contributes the design of an application where learning agents are faced with a rich, real-time environment and are required to work as a team to achieve a common goal. Detailed algorithmic descriptions of the agent's behaviours, as well as a subset of the source code, are included in the thesis. The empirical results obtained validate all contributions within the domain of Unreal Tournament. Ultimately, this dissertation demonstrates that if agent reasoning is implemented using a cognitive reasoning model with defined learning goals, an agent can operate effectively in a complex, real-time, collaborative and adversarial environment. / Thesis (PhDComputerSystemsEng)--University of South Australia, 2006.
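
A minimal, hypothetical sketch (in Python) of the kind of reason-then-learn cycle this abstract describes, assuming nothing about the actual framework: the class name, methods and toy environment below are illustrative only and are not the design developed in the thesis.

import random

class LearningAgent:
    def __init__(self, actions):
        self.actions = actions
        # Learned preference for each action; starts neutral.
        self.preferences = {a: 0.0 for a in actions}

    def reason(self, observation):
        """Decide what to do: mostly follow learned preferences, occasionally explore."""
        if random.random() < 0.1:
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.preferences[a])

    def learn(self, action, feedback):
        """Fold feedback from the environment back into the decision process."""
        self.preferences[action] += 0.1 * (feedback - self.preferences[action])

# Toy environment (hypothetical): one action is simply better than the others.
agent = LearningAgent(["advance", "hold", "retreat"])
for _ in range(200):
    act = agent.reason({})  # observations are ignored in this toy example
    feedback = {"advance": 1.0, "hold": 0.2, "retreat": -0.5}[act] + random.gauss(0, 0.1)
    agent.learn(act, feedback)

print(agent.preferences)  # preferences drift toward the action with the best feedback

The point of the sketch is only that learning sits inside the reasoning loop, as the abstract describes, rather than being a separate offline phase.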
27. Reinforcement learning and approximation complexity / McDonald, Matthew A. F. Unknown Date (has links)
Many tasks can easily be posed as the problem of responding to the states of an external world with actions that maximise the reward received over time. Algorithms that reliably solve such problems exist. However, their worst-case complexities are typically more than proportional to the size of the state space in which a task is to be performed. Many simple tasks involve enormous numbers of states, which can make the application of such algorithms impractical. This thesis examines reinforcement learning algorithms which learn to perform tasks by constructing mappings from states to suitable actions. In problems involving large numbers of states, these algorithms usually must construct approximate, rather than exact, solutions, and the primary issue examined in the thesis is how the complexity of constructing adequate approximations scales as the size of the state space increases. The vast majority of reinforcement learning algorithms operate by constructing estimates of the long-term value of states and using these estimates to select actions. The potential effects of errors in such estimates are examined and shown to be severe. Empirical results are presented which suggest that minor errors are likely to result in significant losses in many problems, and which indicate where such losses are most likely to occur. The complexity of constructing estimates accurate enough to prevent significant losses is also examined empirically and shown to be substantial.
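
For context, a minimal Python sketch (not taken from the thesis) of the value-estimate-based action selection the abstract refers to: a tabular Q-learning agent keeps an estimate of the long-term value of each state-action pair and acts greedily on it, so an error that flips which action looks best translates directly into lost reward. The two-state toy world and all constants here are hypothetical.

import random

N_STATES, N_ACTIONS = 2, 2
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1

# Q[s][a]: estimated long-term value of taking action a in state s.
Q = [[0.0] * N_ACTIONS for _ in range(N_STATES)]

def step(state, action):
    """Toy dynamics: action 0 stays put for reward 1, action 1 moves for reward 0."""
    if action == 0:
        return state, 1.0
    return 1 - state, 0.0

def select_action(state):
    """Mostly act on the current value estimates; explore occasionally."""
    if random.random() < EPSILON:
        return random.randrange(N_ACTIONS)
    return max(range(N_ACTIONS), key=lambda a: Q[state][a])

state = 0
for _ in range(5000):
    action = select_action(state)
    next_state, reward = step(state, action)
    # Q-learning update: move the estimate toward reward plus discounted future value.
    target = reward + GAMMA * max(Q[next_state])
    Q[state][action] += ALPHA * (target - Q[state][action])
    state = next_state

print(Q)  # small estimation errors that reorder the actions would change behaviour everywhere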
28. Developing machine learning techniques for real world applications / Yao, Jian. January 2006 (has links)
Thesis (Ph. D.)--State University of New York at Binghamton, Computer Science Department, 2006. / Includes bibliographical references.
29. Generic solid modelling based machining process simulation / El-Mounayri, Hazim A. January 1997 (has links)
Thesis (Ph. D.)--McMaster University, 1997. / Includes bibliographical references (leaves 155-163). Also available via World Wide Web.
30. The solution paths of multicategory support vector machines algorithm and applications / Cui, Zhenhuan, January 2007 (has links)
Thesis (Ph. D.)--Ohio State University, 2007. / Title from first page of PDF file. Includes bibliographical references (p. 66-68).