191

Limiting programs for induction in artificial intelligence

Caldon, Patrick, Computer Science & Engineering, Faculty of Engineering, UNSW, January 2008
This thesis examines a novel induction-based framework for logic programming. Limiting programs are logic programs distinguished by two features: in general they contain an infinite data stream over which induction will be performed, and in general it is not possible for a system to know when a solution for any program is correct. These facts are characteristic of some problems involving induction in artificial intelligence, and several problems in knowledge representation and logic programming have exactly these properties. This thesis presents a specification language for problems with an inductive nature, limiting programs, and a resolution-based system, limiting resolution, for solving these problems. This framework has properties which guarantee that the system will converge upon a particular answer in the limit. Solutions to problems which have such an inductive property by nature can be implemented using the language and solved with the solver. For instance, many classification problems are inductive by nature. Some generalized planning problems also have the inductive property. For a class of generalized planning problems, we show that identifying a collection of domains where a plan reaches a goal is equivalent to producing a plan. This thesis gives examples of both. Limiting resolution works by a generate-and-test strategy, creating a potential solution and iteratively looking for a contradiction with the growing stream of data provided. Limiting resolution can be implemented by modifying conventional PROLOG technology. The generate-and-test strategy has some inherent inefficiencies. Two improvements have arisen from this work: the first is a tabling strategy which records previously failed attempts to produce a solution and thereby avoids redundant test steps. The second is based on the heuristic observation that for some problems the size of the test step is proportional to the closeness of the generated potential solution to the real solution, in a suitable metric. This observation can be used to improve the performance of limiting resolution. Thus this thesis describes, from theoretical foundations to implementation, a coherent methodology for incorporating induction into existing general A.I. programming techniques, along with examples of how to perform such tasks.
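The generate-and-test loop with tabling described above can be pictured in a few lines. The Python sketch below is an illustrative reconstruction under assumed names (learn_in_the_limit, consistent, and the toy threshold example), not the thesis's PROLOG-based limiting-resolution implementation.

```python
from itertools import count

def learn_in_the_limit(candidates, stream, consistent):
    """Guess-and-test in the limit: after each observation, output the first
    enumerated hypothesis not yet refuted by the data seen so far."""
    data = []                # growing prefix of the (conceptually infinite) stream
    refuted = set()          # tabling: hypotheses already shown inconsistent
    pool = iter(candidates)
    current = next(pool)
    for obs in stream:
        data.append(obs)
        # Skip tabled (previously refuted) hypotheses and any hypothesis
        # contradicted by the data in hand.
        while current in refuted or not consistent(current, data):
            refuted.add(current)
            current = next(pool)
        yield current        # stabilises on a correct hypothesis in the limit

# Toy usage: learn an upper bound on a stream of integers.
guesses = learn_in_the_limit(count(0), [3, 1, 4, 1, 5, 2],
                             lambda h, data: all(x <= h for x in data))
print(list(guesses))         # [3, 3, 4, 4, 5, 5]
```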
192

Steps towards an empirically responsible AI: a methodological and theoretical framework

Svedberg, Peter O.S. January 2004
Initially we pursue a minimal model of a cognitive system. This in turn forms the basis for the development of a methodological and theoretical framework. Two methodological requirements of the model are that explanation be from the perspective of the phenomena, and that we have structural determination. The minimal model is derived from the explanatory side of a biologically based cognitive science. Francisco Varela is our principal source for this part. The model defines the relationship between a formally defined autonomous system and an environment, in such a way as to generate the world of the system, its actual environment. The minimal model is a modular explanation in that we find it on different levels in bio-cognitive systems, from the cell to small social groups. For the latter, and for the role played by artefactual systems, we bring in Edwin Hutchins' observational study of a cognitive system in action. This necessitates the introduction of a complementary form of explanation. A key aspect of Hutchins' findings is the social domain as environment for humans. Aspects of human cognitive abilities usually attributed to the person are more properly attributed to the social system, including artefactual systems. Developing the methodological and theoretical framework means making a transition from the bio-cognitive to the computational. The two complementary forms of explanation are important for the ability to develop a methodology that supports the construction of actual systems. This methodology has to be able to handle the transition from external determination of a system in design to internal determination (autonomy) in operation. Once developed, the combined framework is evaluated in an application area. This is done by comparing the standard conception of the Semantic Web with how this notion looks from the perspective of the framework. This includes the development of the methodological framework as a metalevel external knowledge representation. A key difference between the two approaches is the directness with which the semantics is approached. Our perspective puts the focus on interaction and the structural regularities it engenders in the external representation, regularities which in turn form the basis for machine processing. In this regard we see the relationship between representation and inference as analogous to the relationship between environment and system. Accordingly we have the social domain as environment for artefactual agents. For human-level cognitive abilities the social domain as environment is important. We argue that a reasonable shortcut to systems we can relate to, about that very domain, is for artefactual agents to have an external representation of the social domain as environment.
193

Artificial Intelligence and Robotics

Brady, Michael 01 February 1984
Since Robotics is the field concerned with the connection of perception to action, Artificial Intelligence must have a central role in Robotics if the connection is to be intelligent. Artificial Intelligence addresses the crucial questions of: what knowledge is required in any aspect of thinking; how that knowledge should be represented; and how that knowledge should be used. Robotics challenges AI by forcing it to deal with real objects in the real world. Techniques and representations developed for purely cognitive problems, often in toy domains, do not necessarily extend to meet the challenge. Robots combine mechanical effectors, sensors, and computers. AI has made significant contributions to each component. We review AI contributions to perception and object-oriented reasoning. Object-oriented reasoning includes reasoning about space, path-planning, uncertainty, and compliance. We conclude with three examples that illustrate the kinds of reasoning or problem-solving abilities we would like to endow robots with, abilities that we believe are worthy goals of both Robotics and Artificial Intelligence and are within reach of both.
194

Learning by Augmenting Rules and Accumulating Censors

Winston, Patrick H. 01 May 1982
This paper is a synthesis of several sets of ideas: ideas about learning from precedents and exercises, ideas about learning using near misses, ideas about generalizing if-then rules, and ideas about using censors to prevent procedure misapplication. The synthesis enables two extensions to an implemented system that solves problems involving precedents and exercises and that generates if-then rules as a byproduct. These extensions are as follows: If-then rules are augmented by unless conditions, creating augmented if-then rules. An augmented if-then rule is blocked whenever facts in hand directly demonstrate the truth of an unless condition; the rule is called a censor. Like ordinary augmented if-then rules, censors can be learned. Definition rules are introduced that facilitate graceful refinement. The definition rules are also augmented if-then rules. They work by virtue of unless entries that capture certain nuances of meaning different from those expressible by necessary conditions. Like ordinary augmented if-then rules, definition rules can be learned. The strength of the ideas is illustrated by way of representative experiments. All of these experiments have been performed with an implemented system.
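As one concrete reading of augmented if-then rules, the sketch below (with illustrative field names, not Winston's representation) fires a rule only when its if-conditions hold and none of its unless conditions is directly demonstrated by the facts in hand.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AugmentedRule:
    conclusion: str
    if_conditions: tuple
    unless_conditions: tuple = ()

def fires(rule, facts):
    """True when every if-condition holds and no unless-condition is
    directly demonstrated by the facts in hand (the censoring check)."""
    if not all(c in facts for c in rule.if_conditions):
        return False
    return not any(u in facts for u in rule.unless_conditions)

# Toy example: birds fly, unless shown to be penguins.
flies = AugmentedRule("flies(x)", ("bird(x)",), ("penguin(x)",))
print(fires(flies, {"bird(x)"}))                # True
print(fires(flies, {"bird(x)", "penguin(x)"}))  # False: the rule is blocked
```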
195

The Delta Tree: An Object-Centered Approach to Image-Based Rendering

Dally, William J., McMillan, Leonard, Bishop, Gary, Fuchs, Henry 02 May 1997
This paper introduces the delta tree, a data structure that represents an object using a set of reference images. It also describes an algorithm for generating arbitrary re-projections of an object by traversing its delta tree. Delta trees are an efficient representation in terms of both storage and rendering performance. Each node of a delta tree stores an image taken from a point on a sampling sphere that encloses the object. Each image is compressed by discarding pixels that can be reconstructed by warping its ancestors' images to the node's viewpoint. The partial image stored at each node is divided into blocks and represented in the frequency domain. The rendering process generates an image at an arbitrary viewpoint by traversing the delta tree from a root node to one or more of its leaves. A subdivision algorithm selects only the required blocks from the nodes along the path. For each block, only the frequency components necessary to reconstruct the final image at an appropriate sampling density are used. This frequency selection mechanism handles both antialiasing and level-of-detail within a single framework. A complex scene is initially rendered by compositing images generated by traversing the delta trees of its components. Once the reference views of a scene have been rendered in this manner, the entire scene can be reprojected to an arbitrary viewpoint by traversing its own delta tree. Our approach is limited to generating views of an object from outside the object's convex hull. In practice we work around this problem by subdividing objects to render views from within the convex hull.
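A minimal structural sketch of the data structure, with assumed field names rather than the paper's implementation, is shown below: each node carries the partial image for one sampling-sphere viewpoint, and a simplified render walk descends from the root toward the child whose viewpoint is nearest the requested one, gathering partial images to warp and composite.

```python
from dataclasses import dataclass, field
import math

@dataclass
class DeltaTreeNode:
    viewpoint: tuple                 # point on the sampling sphere (x, y, z)
    partial_image: object            # pixels not reconstructible from ancestors
    children: list = field(default_factory=list)

def collect_partial_images(node, target_viewpoint, collected=None):
    """Walk root-to-leaf, always toward the child viewpoint nearest the
    requested one (simplified: the paper's traversal may reach several
    leaves and selects blocks per node), returning the partial images
    to warp and composite into the new view."""
    if collected is None:
        collected = []
    collected.append(node.partial_image)
    if node.children:
        nearest = min(node.children,
                      key=lambda c: math.dist(c.viewpoint, target_viewpoint))
        collect_partial_images(nearest, target_viewpoint, collected)
    return collected
```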
196

Edge and Mean Based Image Compression

Desai, Ujjaval Y., Mizuki, Marcelo M., Masaki, Ichiro, Horn, Berthold K.P. 01 November 1996
In this paper, we present a static image compression algorithm for very low bit rate applications. The algorithm reduces spatial redundancy present in images by extracting and encoding edge and mean information. Since the human visual system is highly sensitive to edges, an edge-based compression scheme can produce intelligible images at high compression ratios. We present good quality results for facial as well as textured, 256 x 256 color images at 0.1 to 0.3 bpp. The algorithm described in this paper was designed for high performance, keeping hardware implementation issues in mind. In the next phase of the project, which is currently underway, this algorithm will be implemented in hardware, and new edge-based color image sequence compression algorithms will be developed to achieve compression ratios of over 100, i.e., less than 0.12 bpp from 12 bpp. Potential applications include low power, portable video telephones.
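The two ingredients named in the abstract, an edge map and per-block mean information, can be sketched with plain NumPy. The gradient-threshold edge detector, block size, and threshold below are stand-ins chosen for the sketch, not the paper's extractor or parameters.

```python
import numpy as np

def edge_and_mean_encode(image, block=8, edge_thresh=0.2):
    """image: 2D float array in [0, 1]. Returns (edge_map, block_means)."""
    gy, gx = np.gradient(image)                        # simple gradient stand-in
    edge_map = np.hypot(gx, gy) > edge_thresh          # boolean edge map
    h, w = image.shape
    hb, wb = h // block, w // block
    blocks = image[:hb * block, :wb * block].reshape(hb, block, wb, block)
    block_means = blocks.mean(axis=(1, 3))             # one mean per block
    return edge_map, block_means

# Usage sketch: encode a random "image" and inspect the sizes of the two parts.
img = np.random.default_rng(0).random((256, 256))
edges, means = edge_and_mean_encode(img)
print(edges.shape, means.shape)                        # (256, 256) (32, 32)
```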
197

Recognizing 3D Object Using Photometric Invariant

Nagao, Kenji, Grimson, Eric 22 April 1995
In this paper we describe a new efficient algorithm for recognizing 3D objects by combining photometric and geometric invariants. Some photometric properties are derived that are invariant to changes of illumination and to relative object motion with respect to the camera and/or the lighting source in 3D space. We argue that conventional color constancy algorithms cannot be used in the recognition of 3D objects. Further, we show that recognition does not require full constancy of colors; rather, it only needs something that remains unchanged under the varying light conditions and poses of the objects. Combining the derived color invariants and the spatial constraints on the object surfaces, we identify corresponding positions in the model and the data space coordinates, using centroid invariance of corresponding groups of feature positions. Tests are given to show the stability and efficiency of our approach to 3D object recognition.
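One property the abstract leans on, centroid invariance of corresponding groups of feature positions under an affine map, is easy to verify numerically. The NumPy check below demonstrates only that property, not the photometric invariants or the recognition algorithm itself.

```python
import numpy as np

rng = np.random.default_rng(0)
points = rng.normal(size=(10, 2))                  # model feature positions
A = np.array([[1.2, 0.3], [-0.1, 0.8]])            # affine map: x -> A x + t
t = np.array([2.0, -1.0])
mapped = points @ A.T + t                          # corresponding data positions

# The centroid of the mapped group equals the mapped centroid of the group.
centroid_then_map = points.mean(axis=0) @ A.T + t
map_then_centroid = mapped.mean(axis=0)
assert np.allclose(centroid_then_map, map_then_centroid)
```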
198

Direct Object Recognition Using No Higher Than Second or Third Order Statistics of the Image

Nagao, Kenji, Horn, Berthold 01 December 1995
Novel algorithms for object recognition are described that directly recover the transformations relating the image to its model. Unlike methods fitting the typical conventional framework, these new methods do not require exhaustive search for each feature correspondence in order to solve for the transformation, yet they allow simultaneous object identification and recovery of the transformation. Given hypothesized potentially corresponding regions in the model and data (2D views), which come from planar surfaces of the 3D objects, these methods allow direct computation of the parameters of the transformation by which the data may be generated from the model. We propose two algorithms: one based on invariants derived from no higher than second and third order moments of the image, the other via a combination of the affine properties of geometrical attributes and the differential attributes of the image. Empirical results on natural images demonstrate the effectiveness of the proposed algorithms. A sensitivity analysis of the algorithm is presented. We demonstrate in particular that the differential method is quite stable against perturbations, although not without some error, when compared with conventional methods. We also demonstrate mathematically that even a single point correspondence suffices, theoretically at least, to recover affine parameters via the differential method.
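To make the moment idea concrete: for corresponding planar regions related by y = A(x - mean_x) + mean_y, the second-moment (covariance) matrices obey C_data = A C_model A^T, so A is recoverable from matrix square roots up to an orthogonal factor, and the third-order moments mentioned above are what resolve that remaining ambiguity. The sketch below illustrates the second-moment step only and is a reconstruction under that reading, not the paper's algorithm.

```python
import numpy as np

def spd_sqrt(C):
    """Matrix square root of a symmetric positive-definite matrix."""
    w, V = np.linalg.eigh(C)
    return V @ np.diag(np.sqrt(w)) @ V.T

def affine_from_second_moments(model_pts, data_pts):
    """model_pts, data_pts: (N, 2) samples of corresponding planar regions.
    Returns one A satisfying A @ C_model @ A.T == C_data; every solution has
    the form spd_sqrt(C_data) @ Q @ inv(spd_sqrt(C_model)) with Q orthogonal,
    which is the ambiguity that higher-order moments pin down."""
    Cm = np.cov(model_pts, rowvar=False)
    Cd = np.cov(data_pts, rowvar=False)
    return spd_sqrt(Cd) @ np.linalg.inv(spd_sqrt(Cm))
```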
200

Complexity of probabilistic inference in belief nets--an experimental study

Li, Zhaoyu 16 November 1990
There are three families of exact methods used for probabilistic inference in belief nets. It is necessary to compare them, to analyze the advantages and disadvantages of each algorithm, and to know the time cost of making inferences in a given belief network. This paper discusses the factors that influence the computation time of each algorithm, presents a predictive model of the time complexity for each algorithm, and shows the statistical results of testing the algorithms on randomly generated belief networks. Graduation date: 1991
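For a flavor of the kind of experiment described, the sketch below generates simple random belief networks (binary chains, an assumption made for brevity) and times one exact inference method, brute-force enumeration of the joint, as the network grows; the thesis itself compares three families of exact algorithms, none of which is reproduced here.

```python
import itertools, time
import numpy as np

def random_chain_net(n, rng):
    """CPTs for a chain X1 -> X2 -> ... -> Xn of binary variables."""
    prior = rng.uniform(0.1, 0.9)                                  # P(X1 = 1)
    cpts = [rng.uniform(0.1, 0.9, size=2) for _ in range(n - 1)]   # P(child=1 | parent)
    return prior, cpts

def marginal_last(prior, cpts):
    """P(Xn = 1) by summing the full joint (exponential in n)."""
    n = len(cpts) + 1
    total = 0.0
    for assignment in itertools.product((0, 1), repeat=n):
        p = prior if assignment[0] == 1 else 1 - prior
        for i, cpt in enumerate(cpts):
            p1 = cpt[assignment[i]]                  # P(X_{i+2}=1 | parent value)
            p *= p1 if assignment[i + 1] == 1 else 1 - p1
        if assignment[-1] == 1:
            total += p
    return total

rng = np.random.default_rng(1)
for n in (5, 10, 15):
    net = random_chain_net(n, rng)
    start = time.perf_counter()
    marginal_last(*net)
    print(n, f"{time.perf_counter() - start:.4f}s")   # running time grows with n
```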
