About
The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
61

Modeling, Estimation, and Control of Robot-Soil Interactions

Hong, Won 01 September 2001 (has links)
This thesis presents the development of hardware, theory, and experimental methods to enable a robotic manipulator arm to interact with soils and estimate soil properties from interaction forces. Unlike the majority of robotic systems interacting with soil, our objective is parameter estimation, not excavation. To this end, we design our manipulator with a flat plate for easy modeling of interactions. By using a flat plate, we take advantage of the wealth of research on the similar problem of earth pressure on retaining walls. There are a number of existing earth pressure models. These models typically provide estimates of force whose relation to the true force is uncertain. A recent technique, known as numerical limit analysis, provides upper and lower bounds on the true force. Predictions from the numerical limit analysis technique are shown to be in good agreement with other accepted models. Experimental methods for plate insertion, soil-tool interface friction estimation, and control of applied forces on the soil are presented. In addition, a novel graphical technique for inverting the soil models is developed, which is an improvement over standard nonlinear optimization. This graphical technique utilizes the uncertainties associated with each set of force measurements to obtain all possible parameters which could have produced the measured forces. The system is tested on three cohesionless soils, two in a loose state and one in both a loose and a dense state. The results are compared with friction angles obtained from direct shear tests. The results highlight a number of key points. Soil modeling relies on common assumptions, most notably the Mohr-Coulomb failure law and perfectly plastic behavior. In the direct shear tests, a marked dependence of friction angle on the normal stress at low stresses is found, which has ramifications for any study of friction done at low stresses. In addition, gradual failures are often observed for vertical tools and tools inclined away from the direction of motion. After accounting for the change in friction angle at low stresses, the results show good agreement with the direct shear values.
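For readers unfamiliar with earth pressure models, a classical point of reference is Rankine's solution for a smooth vertical wall retaining a horizontal, cohesionless backfill. The minimal statement below is that textbook model, not the thesis's numerical limit analysis (which instead brackets the true force with upper and lower bounds):

```latex
% Rankine active/passive earth pressure coefficients for friction angle \phi
K_a = \tan^2\!\left(45^\circ - \tfrac{\phi}{2}\right), \qquad
K_p = \tan^2\!\left(45^\circ + \tfrac{\phi}{2}\right),
% resultant force per unit width on a wall (or plate) of buried depth H
% in soil of unit weight \gamma
P_a = \tfrac{1}{2}\, K_a\, \gamma\, H^2, \qquad
P_p = \tfrac{1}{2}\, K_p\, \gamma\, H^2 .
```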
62

2D-3D Rigid-Body Registration of X-Ray Fluoroscopy and CT Images

Zollei, Lilla 01 August 2001 (has links)
The registration of pre-operative volumetric datasets to intra-operative two-dimensional images provides an improved way of verifying patient position and medical instrument location. In applications from orthopedics to neurosurgery, it has a great value in maintaining up-to-date information about changes due to intervention. We propose a mutual information-based registration algorithm to establish the proper alignment. For optimization purposes, we compare the performance of the non-gradient Powell method and two slightly different versions of a stochastic gradient ascent strategy: one using a sparsely sampled histogramming approach and the other Parzen windowing to carry out probability density approximation. Our main contribution lies in adapting the stochastic approximation scheme successfully applied in 3D-3D registration problems to the 2D-3D scenario, which obviates the need for the generation of full DRRs at each iteration of pose optimization. This facilitates considerable savings in computation expense. We also introduce a new probability density estimator for image intensities via sparse histogramming, derive gradient estimates for the density measures required by the maximization procedure and introduce the framework for a multiresolution strategy to the problem. Registration results are presented on fluoroscopy and CT datasets of a plastic pelvis and a real skull, and on a high-resolution CT-derived simulated dataset of a real skull, a plastic skull, a plastic pelvis and a plastic lumbar spine segment.
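As background, the similarity measure at the heart of such algorithms is mutual information estimated from image intensities. A minimal histogram-based sketch is shown below; it uses plain dense histogramming, not the sparse histogramming or Parzen windowing variants compared in the thesis, and the image sizes and bin count are arbitrary:

```python
import numpy as np

def mutual_information(img_a, img_b, bins=32):
    """Histogram-based mutual information between two intensity images."""
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    p_ab = joint / joint.sum()                 # joint probability
    p_a = p_ab.sum(axis=1, keepdims=True)      # marginal of img_a
    p_b = p_ab.sum(axis=0, keepdims=True)      # marginal of img_b
    nz = p_ab > 0                              # avoid log(0)
    return float(np.sum(p_ab[nz] * np.log(p_ab[nz] / (p_a @ p_b)[nz])))

# Example: MI of an image with a noisy copy of itself
rng = np.random.default_rng(0)
fixed = rng.random((128, 128))
moving = fixed + 0.05 * rng.standard_normal((128, 128))
print(mutual_information(fixed, moving))
```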
63

Bagging Regularizes

Poggio, Tomaso, Rifkin, Ryan, Mukherjee, Sayan, Rakhlin, Alex 01 March 2002 (has links)
Intuitively, we expect that averaging --- or bagging --- different regressors with low correlation should smooth their behavior and be somewhat similar to regularization. In this note we make this intuition precise. Using an almost classical definition of stability, we prove that a certain form of averaging provides generalization bounds with a rate of convergence of the same order as Tikhonov regularization --- similar to fashionable RKHS-based learning algorithms.
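A small numerical illustration of the intuition (not the paper's stability-based proof): averaging many high-variance regressors trained on bootstrap resamples typically smooths the fit relative to a single regressor. The polynomial degree, sample size, and noise level below are arbitrary choices for a toy example:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data: noisy sine fitted with a high-degree polynomial (low bias, high variance)
x = np.sort(rng.uniform(-1, 1, 60))
y = np.sin(3 * x) + 0.3 * rng.standard_normal(x.size)
grid = np.linspace(-1, 1, 200)

def poly_fit_predict(xs, ys, deg=8):
    coefs = np.polyfit(xs, ys, deg)
    return np.polyval(coefs, grid)

# Bagging: average regressors trained on bootstrap resamples
preds = []
for _ in range(100):
    idx = rng.integers(0, x.size, x.size)   # bootstrap sample (with replacement)
    preds.append(poly_fit_predict(x[idx], y[idx]))
bagged = np.mean(preds, axis=0)

single = poly_fit_predict(x, y)
truth = np.sin(3 * grid)
print("single-fit MSE:", np.mean((single - truth) ** 2))
print("bagged MSE    :", np.mean((bagged - truth) ** 2))
```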
64

Risk Bounds for Mixture Density Estimation

Rakhlin, Alexander, Panchenko, Dmitry, Mukherjee, Sayan 27 January 2004 (has links)
In this paper we focus on the problem of estimating a bounded density using a finite combination of densities from a given class. We consider the Maximum Likelihood Procedure (MLE) and the greedy procedure described by Li and Barron. Approximation and estimation bounds are given for the above methods. We extend and improve upon the estimation results of Li and Barron, and in particular prove an $O(\frac{1}{\sqrt{n}})$ bound on the estimation error which does not depend on the number of densities in the estimated combination.
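For intuition, a greedy procedure of this kind builds the estimate one component at a time, $f_k = (1 - \alpha) f_{k-1} + \alpha \phi_\theta$, choosing the new component and its weight to increase the likelihood. The sketch below is a schematic version with Gaussian components, a fixed bandwidth, and grid search in place of proper optimization; these are illustrative assumptions, not the paper's setup:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)
data = np.concatenate([rng.normal(-2, 0.5, 300), rng.normal(1, 1.0, 700)])

def greedy_mixture(x, k_max=5, mus=np.linspace(-4, 4, 41), sigma=0.7,
                   alphas=np.linspace(0.05, 0.95, 19)):
    """Greedily build f_k = (1 - a) * f_{k-1} + a * N(mu, sigma^2)."""
    f = np.full(x.shape, np.nan)   # current mixture density at the data points
    components = []
    for k in range(k_max):
        best = None
        for mu in mus:
            phi = norm.pdf(x, mu, sigma)
            for a in ([1.0] if k == 0 else alphas):
                mix = phi if k == 0 else (1 - a) * f + a * phi
                ll = np.sum(np.log(mix + 1e-300))
                if best is None or ll > best[0]:
                    best = (ll, a, mu, mix)
        ll, a, mu, f = best
        components.append((a, mu))
        print(f"k={k+1}: added N({mu:.2f}, {sigma}^2) with weight {a:.2f}, loglik={ll:.1f}")
    return components

greedy_mixture(data)
```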
65

Steps towards an empirically responsible AI : a methodological and theoretical framework

Svedberg, Peter O.S. January 2004 (has links)
Initially we pursue a minimal model of a cognitive system. This in turn forms the basis for the development of a methodological and theoretical framework. Two methodological requirements of the model are that explanation be from the perspective of the phenomena, and that we have structural determination. The minimal model is derived from the explanatory side of a biologically based cognitive science. Francisco Varela is our principal source for this part. The model defines the relationship between a formally defined autonomous system and an environment in such a way as to generate the world of the system, its actual environment. The minimal model is a modular explanation in that we find it on different levels in bio-cognitive systems, from the cell to small social groups. For the latter, and for the role played by artefactual systems, we bring in Edwin Hutchins' observational study of a cognitive system in action. This necessitates the introduction of a complementary form of explanation. A key aspect of Hutchins' findings is the social domain as environment for humans. Aspects of human cognitive abilities usually attributed to the person are more properly attributed to the social system, including artefactual systems. Developing the methodological and theoretical framework means making a transition from the bio-cognitive to the computational. The two complementary forms of explanation are important for the ability to develop a methodology that supports the construction of actual systems. This methodology has to be able to handle the transition from external determination of a system in design to internal determination (autonomy) in operation. Once developed, the combined framework is evaluated in an application area. This is done by comparing the standard conception of the Semantic Web with how this notion looks from the perspective of the framework. This includes the development of the methodological framework as a metalevel external knowledge representation. A key difference between the two approaches is the directness with which the semantics is approached. Our perspective puts the focus on interaction and the structural regularities this engenders in the external representation, regularities which in turn form the basis for machine processing. In this regard we see the relationship between representation and inference as analogous to the relationship between environment and system. Accordingly we have the social domain as environment for artefactual agents. For human-level cognitive abilities the social domain as environment is important. We argue that a reasonable shortcut to systems we can relate to, about that very domain, is for artefactual agents to have an external representation of the social domain as environment.
66

Beteendevarierande agenter med stokastiska tillståndsmaskiner

Magnusson, Martin January 2009 (has links)
State machines were one of the first techniques used to create AI (artificial intelligence) in computer games and are still one of the most common techniques. In recent years, however, several new techniques, such as ANNs (artificial neural networks), have begun to be used to create more advanced AI in computer games. Many consider that state machines cannot produce sufficiently smart agents and that the agents often become predictable. In general, state machines are not well suited when many agents with different behaviours are wanted, since this often requires special code for each unique agent. This work investigates the possibilities of creating agents with varied behaviour, without having to write unique code for each behaviour, by using only stochastic state machines and templates to control behaviour. The results show that by varying the transition probabilities in the stochastic state machine, the agents' actions can be influenced to a large degree. The technique presented in this work is a good alternative for those who want to add AI agents to their games quickly and easily while retaining the ability to vary their behaviours.
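A minimal sketch of the idea follows: agents share one state machine implementation, and only the transition-probability template varies per agent. The state names, templates, and probabilities below are illustrative, not taken from the thesis:

```python
import random

# Behaviour templates: same states, different transition probabilities.
AGGRESSIVE = {
    "patrol": {"patrol": 0.2, "chase": 0.7, "flee": 0.1},
    "chase":  {"patrol": 0.1, "chase": 0.8, "flee": 0.1},
    "flee":   {"patrol": 0.5, "chase": 0.4, "flee": 0.1},
}
CAUTIOUS = {
    "patrol": {"patrol": 0.6, "chase": 0.1, "flee": 0.3},
    "chase":  {"patrol": 0.3, "chase": 0.3, "flee": 0.4},
    "flee":   {"patrol": 0.2, "chase": 0.0, "flee": 0.8},
}

class StochasticAgent:
    """One shared implementation; behaviour varies only through the template."""
    def __init__(self, template, start="patrol"):
        self.template = template
        self.state = start

    def step(self):
        options = self.template[self.state]
        self.state = random.choices(list(options), weights=list(options.values()))[0]
        return self.state

hunter = StochasticAgent(AGGRESSIVE)
coward = StochasticAgent(CAUTIOUS)
print("hunter:", [hunter.step() for _ in range(10)])
print("coward:", [coward.step() for _ in range(10)])
```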
68

Sensorimotor embedding : a developmental approach to learning geometry

Stober, Jeremy Michael 03 September 2015 (has links)
A human infant facing the blooming, buzzing confusion of the senses grows up to be an adult with common-sense knowledge of geometry; this knowledge then allows her to describe the shapes of objects, the layouts of places, and the relative locations of things naturally and effortlessly. In robotics, such knowledge is usually built in by a human designer who needs to solve complex engineering problems of sensor calibration and inference. In contrast, this dissertation presents a model for how autonomous agents can form an understanding of geometry the same way infants do: by learning from early unstructured sensorimotor experience. Through a framework called sensorimotor embedding, an agent reconstructs knowledge of its own sensor structure, the local geometry of the world, and the pose of objects within the world. The validity of this knowledge is demonstrated directly through Procrustes analysis and indirectly by using it to solve the mountain car task with different morphologies. The dissertation demonstrates how sensorimotor embedding can serve as a robust approach for acquiring geometric knowledge.
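For reference, Procrustes analysis aligns a reconstructed configuration of points with a ground-truth configuration and reports the residual mismatch. The sketch below is a generic orthogonal Procrustes alignment (with centring and uniform scaling, reflections allowed), not the dissertation's specific evaluation code:

```python
import numpy as np

def procrustes_align(X, Y):
    """Align point set Y onto X with an optimal rotation and uniform scale."""
    Xc, Yc = X - X.mean(0), Y - Y.mean(0)
    U, s, Vt = np.linalg.svd(Xc.T @ Yc)
    R = U @ Vt                             # optimal orthogonal map (up to reflection)
    scale = s.sum() / (Yc ** 2).sum()      # optimal uniform scaling
    Y_aligned = scale * Yc @ R.T + X.mean(0)
    residual = np.sqrt(((X - Y_aligned) ** 2).sum())
    return Y_aligned, residual

# Example: recover a rotated, scaled, noisy copy of a 2-D point set
rng = np.random.default_rng(3)
X = rng.random((50, 2))
theta = 0.8
Rot = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
Y = 2.0 * X @ Rot.T + 0.01 * rng.standard_normal(X.shape)
_, err = procrustes_align(X, Y)
print("alignment residual:", err)
```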
69

NHS at home : co-designing a 21st century nursing bag

Swann, David January 2012 (has links)
Healthcare providers throughout the world are facing unprecedented change. In rising to social, demographic and economic pressures, the National Health Service is mobilising hospital treatments into patients' homes (Darzi, 2006). The black nursing bag, the universal transportation tool of the district nurse, has remained impervious to design change for over 100 years. The goal of the PhD is to equip newly formed neighbourhood care teams working in this emergent healthcare setting with a 21st century nursing bag. The design practice seeks to optimise the efficient delivery of patient care, standardise patient experiences in an inconsistent setting and enhance patient safety performance through design. The PhD by practice is sponsored by the Engineering and Physical Sciences Research Council (EPSRC) and supported by NHS East Riding of Yorkshire (NHS ERY). The PhD is participatory and proposes a refined theoretical model to achieve its objectives: a strategy-system-experience-product continuum. Qualitative and quantitative methods have: identified variance in the nursing bags used in practice; captured the presence of MRSA inside and on bags; applied Lego Serious Play to envision aspirational products; used analogous case studies to determine the discrete attributes of world-class services delivered in confined spaces and of luxury travel products; captured workflow using link analysis of simulated treatments; determined the efficacy of hand-cleaning techniques; evaluated design forms using UV analysis to enhance the effectiveness of hand-cleaning; and carried out heuristic evaluations informing design decisions through stakeholder presentations, international design competitions and industry opinion. Analytical, creative and experimental collaborative practices have contributed to the co-creation of a world-class nursing bag fit for the challenges of the 21st century. Validation workshops have verified that the new bag reshapes the way home healthcare is delivered, experienced and accepted: it increases clinical efficiency through modularity, standardises the patient's service experience and delivers economic benefits to the commissioners of home healthcare services.
70

Multi-agent malicious behaviour detection

Wegner, Ryan 24 October 2012 (has links)
This research presents a novel technique termed Multi-Agent Malicious Behaviour Detection. The goal of Multi-Agent Malicious Behaviour Detection is to provide infrastructure to allow for the detection and observation of malicious multi-agent systems in computer network environments. This research explores combinations of machine learning techniques and fuses them with a multi-agent approach to malicious behaviour detection that effectively blends human expertise from network defenders with modern artificial intelligence. Success of the approach depends on the Multi-Agent Malicious Behaviour Detection system's capability to adapt to evolving malicious multi-agent system communications, even as the malicious software agents in network environments vary in their degree of autonomy and intelligence. This thesis research involves the design of this framework, its implementation into a working tool, and its evaluation using network data generated by an enterprise class network appliance to simulate both a standard educational network and an educational network containing malware traffic.
