1 |
The Rooftop Raven Project: An Exploratory, Qualitative Study of Puzzle Solving Ability in Wild and Captive Ravens. Cory, Emily Faun. January 2016
The family Corvidae, which includes crows and ravens, contains arguably some of the most intelligent species in the animal kingdom. Separated from primates by at least 252 million years of evolution, birds differ strikingly from mammals in physiology while displaying similar intellectual abilities. This apparent convergent evolution of intelligence sheds light on what may be a universal phenomenon. While many excellent studies demonstrate the abilities of corvids, the majority test only captive subjects. This study tested the capabilities of both captive and wild ravens from three different species. The first portion of the study examined which of four offered solutions wild ravens would choose when solving a Multi-Access Box. The second portion compared the performance of wild and captive ravens when solving a Multi-Latch Box. The nine raven subjects were split into four levels of enculturation based on their known histories: two wild common ravens (Corvus corax) on the campus of the University of Arizona were level 1, four wild common ravens at a United States Forest Service parking lot were level 2, two captive, trained Chihuahuan ravens (Corvus cryptoleucus) from the Raptor Free Flight program at the Arizona-Sonora Desert Museum comprised level 3, and one captive, trained white-necked raven (Corvus albicollis) made up level 4. The study showed that it is possible to run trials with completely wild and free birds. It was found that ravens prefer direct methods of obtaining food, such as opening doors and pulling strings, over tool use. It was also found that, while the relationship between enculturation level and puzzle-solving success was not linear, captive birds were the best solvers. The data presented here suggest that captivity, training and enrichment history, and enculturation should all be considered when performing cognitive studies with animals.
2 |
Computer-assisted fracture reduction in an orthopaedic pre-operative planning workflow. Mangs, Ludvig. January 2017
This report presents three implementations for solving 3D puzzles of fractured bones: two semi-automatic and one automatic. These are compared using both qualitative and quantitative tests to determine whether less interaction can yield equal or better results. Qualitative tests are performed on real clinical data from CT scans, and a model created in Blender is used for the quantitative tests. The results show that each implementation has its own strengths and weaknesses, which can make it suitable for different types of fractures. It may be possible to combine automatic and manual solutions to increase the number of solvable cases. The conclusion is that fractures can be reduced with less user interaction while achieving equal or better results, but this depends on the fracture case as well as the user.
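The abstract does not describe the underlying algorithms, so the following is only an illustrative sketch: a common core step in this kind of fracture-reduction workflow is computing the rigid transform that moves one bone fragment onto its matching fracture surface. The Kabsch/Procrustes solution below assumes corresponding point pairs are already available (for example, picked by the user in a semi-automatic implementation); the function name rigid_align and the toy data are hypothetical and not taken from the thesis.

```python
import numpy as np

def rigid_align(source_pts, target_pts):
    """Kabsch/Procrustes: rigid transform (R, t) mapping source_pts onto target_pts.

    Both inputs are (N, 3) arrays of corresponding surface points. The
    correspondences themselves are a hypothetical input, e.g. picked manually
    in a semi-automatic workflow or matched automatically.
    """
    src_c = source_pts.mean(axis=0)
    tgt_c = target_pts.mean(axis=0)
    # Covariance of the centred point sets
    H = (source_pts - src_c).T @ (target_pts - tgt_c)
    U, _, Vt = np.linalg.svd(H)
    # Reflection guard keeps the result a proper rotation
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = tgt_c - R @ src_c
    return R, t

# Toy usage: recover a known rotation and translation of a fragment surface
rng = np.random.default_rng(0)
source = rng.normal(size=(50, 3))
theta = np.deg2rad(30)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
t_true = np.array([5.0, -2.0, 1.0])
target = source @ R_true.T + t_true
R, t = rigid_align(source, target)
print(np.allclose(source @ R.T + t, target))  # True
```

In practice such an alignment step would sit inside a larger pipeline (segmentation of fragments from the CT volume, correspondence finding, and global optimisation over all fragments), which the thesis evaluates in semi-automatic and automatic variants.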
3 |
Solving Tetris-like Puzzles with Informed Search and Machine Learning. Nilsson, Anneli. January 2021
Assembling different kinds of items, from furniture to hobby models, follows a process whose complexity can vary. An interesting aspect of this process is which components are available during assembly. The optimal scenario would be to have all required components available, but that is not always the case. For a computer, this problem can be difficult to solve and requires a purpose-built environment in which to carry out an assembly task. In this thesis work, block puzzles with various blueprints were assembled using two different lists of components: one complete set of correct components, and one mixed set whose components may or may not fit a given blueprint. Three different methods were used to conduct the assemblies: a random-based method, an informed search method using iterative deepening A* (IDA*), and a reinforcement learning method using dueling deep Q-networks. Assembly time and the accuracy of the completed configuration against the blueprint were measured for each method. The informed search performed best in terms of accuracy but had long assembly times. The reinforcement learning method did not perform well in terms of accuracy between blueprint and configuration but had fast assembly times, and in its current state it would not be suitable for solving the given problem.
The thesis work was carried out at the Department of Science and Technology (ITN), Faculty of Science and Engineering, Linköping University.
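The abstract names iterative deepening A* but does not reproduce the implementation, so the following is a minimal, generic IDA* sketch in Python. The state representation, successor generation, and heuristic (for example, the number of blueprint cells not yet covered by a placed block) are assumptions for illustration, not the thesis code.

```python
def ida_star(start, goal_test, successors, heuristic):
    """Minimal iterative deepening A* skeleton.

    `start` is a hashable state, `successors(state)` yields (next_state, cost)
    pairs, and `heuristic(state)` is an admissible estimate of the remaining
    cost. Returns the list of states on a cheapest path, or None.
    """
    def search(path, g, bound):
        state = path[-1]
        f = g + heuristic(state)
        if f > bound:
            return f, None
        if goal_test(state):
            return f, list(path)
        minimum = float("inf")
        for nxt, cost in successors(state):
            if nxt in path:  # avoid trivial cycles
                continue
            path.append(nxt)
            threshold, found = search(path, g + cost, bound)
            path.pop()
            if found is not None:
                return threshold, found
            minimum = min(minimum, threshold)
        return minimum, None

    bound = heuristic(start)
    while True:
        threshold, found = search([start], 0, bound)
        if found is not None:
            return found
        if threshold == float("inf"):
            return None
        bound = threshold  # deepen to the smallest f-value that was pruned

# Toy usage: reach 5 from 0 in steps of +1/+2 (a stand-in for block placements)
path = ida_star(
    0,
    goal_test=lambda s: s == 5,
    successors=lambda s: [(s + 1, 1), (s + 2, 1)],
    heuristic=lambda s: max(0, (5 - s + 1) // 2),
)
print(path)  # a cheapest 3-step path, e.g. [0, 1, 3, 5]
```

The repeated depth-limited searches explain the long assembly times reported for the informed search: optimality of the final configuration is traded for exhaustive re-expansion of states at each bound.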