About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.

Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
91

Ion-selective electrodes for simultaneous real-time analysis for soil macronutrients

Kim, Hak Jin, January 2006 (has links)
Thesis (Ph.D.)--University of Missouri-Columbia, 2006. / The entire dissertation/thesis text is included in the research.pdf file; the official abstract appears in the short.pdf file (which also appears in the research.pdf); a non-technical general description, or public abstract, appears in the public.pdf file. / Title from title screen of research.pdf file (viewed April 26, 2007). / Vita. / Includes bibliographical references.
92

Bagging E-Bayes for Estimated Breeding Value Prediction

Xu, Jiaofen 11 1900 (has links)
This work focuses on the evaluation of a bagging EB method in terms of its ability to select a subset of QTL-related markers for accurate EBV prediction. Experiments were performed on several simulated and real datasets consisting of SNP genotypes and phenotypes. The simulated datasets modeled different dominance levels and different levels of background noise. Our results show that the bagging EB method is able to detect most of the simulated QTL, even with large background noise. The average recall of QTL detection was 0.71. When using the markers detected by the bagging EB method to predict EBVs, the prediction accuracy improved dramatically on the simulated datasets compared to using the entire set of markers. However, the prediction accuracy did not improve much when performing the same experiments on the two real datasets. The best accuracy of EBV prediction we achieved for the dairy dataset was 0.57, and the best accuracy for the beef dataset was 0.73.
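The bagging idea in this abstract can be illustrated with a toy sketch: fit a marker scorer on many bootstrap samples and keep the markers that rank highly in most of them. The scorer below is plain absolute correlation, a deliberate stand-in for the E-Bayes model the thesis evaluates; the simulated genotypes, the "true" marker indices, and all thresholds are invented for illustration.

```python
import random
import statistics

random.seed(0)

# Toy data: 200 individuals, 20 SNP markers with 0/1/2 genotypes;
# markers 3 and 7 are the simulated QTL driving the phenotype.
n, m = 200, 20
geno = [[random.choice((0, 1, 2)) for _ in range(m)] for _ in range(n)]
pheno = [g[3] * 2.0 + g[7] * 1.5 + random.gauss(0, 1) for g in geno]

def corr(xs, ys):
    """Pearson correlation of two equal-length sequences."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = (sum((x - mx) ** 2 for x in xs) * sum((y - my) ** 2 for y in ys)) ** 0.5
    return num / den if den else 0.0

def bagged_selection(geno, pheno, n_bags=30, top_k=3):
    """Count how often each marker ranks in the top_k on a bootstrap sample."""
    counts = [0] * len(geno[0])
    for _ in range(n_bags):
        bag = [random.randrange(len(geno)) for _ in range(len(geno))]
        g = [geno[i] for i in bag]
        p = [pheno[i] for i in bag]
        scores = [abs(corr([row[j] for row in g], p)) for j in range(len(counts))]
        for j in sorted(range(len(counts)), key=scores.__getitem__, reverse=True)[:top_k]:
            counts[j] += 1
    return counts

counts = bagged_selection(geno, pheno)
selected = [j for j, c in enumerate(counts) if c >= 15]  # kept in at least half the bags
```

Aggregating the selection over bootstrap replicates is what makes the subset stable: a marker that only looks important on one resample is filtered out.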
93

The Generalized Cayley Map from an Algebraic Group to its Lie Algebra

michor@esi.ac.at 11 September 2001 (has links)
No description available.
94

The comparative effect of individually-generated vs. collaboratively-generated computer-based concept mapping on science concept learning

Kwon, So Young 17 September 2007 (has links)
Using a quasi-experimental design, the researcher investigated the comparative effects of individually-generated and collaboratively-generated computer-based concept mapping on middle school science concept learning. Qualitative data were analyzed to explain quantitative findings. One hundred sixty-one students (74 boys and 87 girls) in eight seventh-grade science classes at a middle school in Southeast Texas completed the entire study. Using prior science performance scores to assure equivalence of student achievement across groups, the researcher assigned the teacher’s classes to one of the three experimental groups. The independent variable, group, consisted of three levels: 40 students in a control group, 59 students trained to individually generate concept maps on computers, and 62 students trained to collaboratively generate concept maps on computers. The dependent variables were science concept learning, as demonstrated by comprehension test scores, and the quality of concept maps created by students in the experimental groups, as demonstrated by rubric scores. Students in the experimental groups received concept mapping training and used their newly acquired concept mapping skills to individually or collaboratively construct computer-based concept maps during study time. The control group, the individually-generated concept mapping group, and the collaboratively-generated concept mapping group had equivalent learning experiences for 50 minutes on each of five days, except that students in the control group worked independently without concept mapping activities, students in the individual group worked individually to construct concept maps, and students in the collaborative group worked collaboratively to construct concept maps during their study time. Both collaboratively and individually generated computer-based concept mapping had a positive effect on seventh-grade middle school science concept learning, but neither strategy was more effective than the other.
However, the students who collaboratively generated concept maps created significantly higher quality concept maps than those who individually generated concept maps. The researcher concluded that the concept mapping software, Inspiration™, fostered construction of students’ concept maps individually or collaboratively for science learning and helped students capture their evolving creative ideas and organize them for meaningful learning. Students in both the individual and the collaborative concept mapping groups had positive attitudes toward concept mapping using Inspiration™ software.
95

Exploiting structure in man-made environments

Aydemir, Alper January 2012 (has links)
Robots are envisioned to take on jobs that are dirty, dangerous and dull, the three D's of robotics. With this mission, robotic technology today is ubiquitous on the factory floor. However, the same level of success has not occurred when it comes to robots that operate in everyday living spaces, such as homes and offices. A big part of this is attributed to domestic environments being complex and unstructured as opposed to factory settings which can be set up and precisely known in advance. In this thesis we challenge the point of view which regards man-made environments as unstructured and that robots should operate without prior assumptions about the world. Instead, we argue that robots should make use of the inherent structure of everyday living spaces across various scales and applications, in the form of contextual and prior information, and that doing so can improve the performance of robotic tasks. To investigate this premise, we start by attempting to solve a hard and realistic problem, active visual search. The particular scenario considered is that of a mobile robot tasked with finding an object on an entire unexplored building floor. We show that a search strategy which exploits the structure of indoor environments offers significant improvements on state of the art and is comparable to humans in terms of search performance. Based on the work on active visual search, we present two specific ways of making use of the structure of space. First, we propose to use the local 3D geometry as a strong indicator of objects in indoor scenes. By learning a 3D context model for various object categories, we demonstrate a method that can reliably predict the location of objects. Second, we turn our attention to predicting what lies in the unexplored part of the environment at the scale of rooms and building floors. By analyzing a large dataset, we propose that indoor environments can be thought of as being composed out of frequently occurring functional subparts. 
Utilizing these, we present a method that can make informed predictions about the unknown part of a given indoor environment. The ideas presented in this thesis explore various sides of the same idea: modeling and exploiting the structure inherent in indoor environments for the sake of improving robot's performance on various applications. We believe that in addition to contributing some answers, the work presented in this thesis will generate additional, fruitful questions. / <p>QC 20121105</p> / CogX
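As a caricature of the 3D-context idea described above, the sketch below learns a one-dimensional height prior per object category and uses it to rank candidate locations. The categories, the observed heights, and the Gaussian model are all invented illustrative assumptions, far simpler than the 3D geometry models the thesis actually learns.

```python
import math
import statistics

# Hypothetical training data: observed support-surface heights (m) at which
# each object category was found. These numbers are made up for illustration.
observed = {
    "mug":     [0.72, 0.75, 0.74, 0.78, 0.71],   # tabletop heights
    "monitor": [0.74, 0.76, 0.73, 0.75],
}

# Fit a 1-D Gaussian height prior per category: (mean, standard deviation).
priors = {c: (statistics.fmean(h), statistics.stdev(h)) for c, h in observed.items()}

def location_score(category, height):
    """Gaussian likelihood of finding the object at a candidate height."""
    mu, sigma = priors[category]
    return math.exp(-((height - mu) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

# Rank candidate heights for "mug": a tabletop-like height vs. floor level.
candidates = [0.75, 0.05]
ranked = sorted(candidates, key=lambda h: location_score("mug", h), reverse=True)
```

Even this trivial prior concentrates the search: the tabletop-height candidate dominates the floor-level one, which is the same mechanism, contextual geometry reweighting candidate locations, that the thesis exploits with richer 3D features.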
96

Implementation of a Geoserver Application for GIS Data Distribution and Manipulation

Kommana, Karteek January 2013 (has links)
Accessibility and interactivity are keywords of information today, and that is as important in science as anywhere else. When scientists share information, it helps if the presentation is intuitive, informative, and simple, and does not demand expert skills or complicated formats. This master thesis has the aim to investigate open source software tools to design a web map application that can be used by any institute or NGO to distribute their data over the internet. The implementation builds on the open source JavaScript library OpenLayers, which allows users to view and potentially manipulate GIS map data through a web map application. Whatever GIS data are made available on the GeoServer (the host site for the application) can be shared with users worldwide. The user can then: add from a list of available data layers, choose a background (e.g. Google Earth, OpenStreetMap, etc.), zoom in and out, pan, change symbols and colors, add their own data on top, and start an animation (if applicable). The data distributed from the GeoServer can also be viewed and accessed from smartphones, which opens the possibility of making the public part of the larger data-gathering task of specific scientific inventories, such as observations of migrating birds, or whatever indicator a specific scientist is interested in. Data are uploaded to the GeoServer, can then be analyzed, and the results are distributed to the public.
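Behind a viewer such as OpenLayers, GeoServer serves map images through the standard OGC WMS interface: the client ultimately issues GetMap requests. A minimal sketch of building such a request with only the Python standard library; the host, workspace, layer name, and bounding box are hypothetical placeholders, not values from the thesis.

```python
from urllib.parse import urlencode

# Hypothetical local GeoServer endpoint: substitute your own server.
base = "http://localhost:8080/geoserver/wms"

def getmap_url(layer, bbox, width=512, height=512, fmt="image/png"):
    """Build an OGC WMS 1.1.1 GetMap request URL for a published layer."""
    params = {
        "service": "WMS",
        "version": "1.1.1",
        "request": "GetMap",
        "layers": layer,
        "bbox": ",".join(str(c) for c in bbox),  # minx,miny,maxx,maxy
        "srs": "EPSG:4326",
        "width": width,
        "height": height,
        "format": fmt,
    }
    return base + "?" + urlencode(params)

# Hypothetical "workspace:layer" name and a bounding box over Sweden.
url = getmap_url("birds:observations", (10.0, 55.0, 25.0, 70.0))
```

Fetching this URL returns a rendered map tile, which is exactly what a browser-based viewer does repeatedly as the user pans and zooms.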
97

Improved Particle Filter Based Localization and Mapping Techniques

Milstein, Adam January 2008 (has links)
One of the most fundamental problems in mobile robotics is localization. The solution to most problems requires that the robot first determine its location in the environment. Even if the absolute position is not necessary, the robot must know where it is in relation to other objects. Virtually all activities require this preliminary knowledge. Another part of the localization problem is mapping: the robot’s position depends on its representation of the environment. An object’s position cannot be known in isolation, but must be determined in relation to other objects. A map gives the robot’s understanding of the world around it, allowing localization to provide a position within that representation. The quality of localization thus depends directly on the quality of mapping. When a robot is moving in an unknown environment, these problems must be solved simultaneously, in a problem called SLAM (Simultaneous Localization and Mapping). Some of the best current techniques for localization and SLAM are based on particle filters, which approximate the belief state. Monte Carlo Localization (MCL) is a solution to basic localization, while FastSLAM is used to solve the SLAM problem. Although these techniques are powerful, certain assumptions reduce their effectiveness. In particular, both techniques assume an underlying static environment, as well as certain basic sensor models. Also, MCL applies to the case where the map is entirely known, while FastSLAM handles an entirely unknown map. In the case of partial knowledge, MCL cannot succeed, while FastSLAM must discard the additional information. My research provides improvements to particle-based localization and mapping which overcome some of the problems with these techniques, without reducing the original capabilities of the algorithms. I also extend their application to additional situations and make them more robust to several types of error.
The improved solutions allow more accurate localization to be performed, so that robots can be used in additional situations.
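The particle-filter machinery behind MCL can be sketched in a toy one-dimensional setting: move each particle with the motion model, weight it by the likelihood of the range measurement against a known map, then resample. The corridor, beacon map, and noise parameters below are invented for illustration and are far simpler than the algorithms this thesis improves on.

```python
import math
import random
import statistics

random.seed(1)

beacons = [1.5, 4.0, 9.0]   # known map: beacon positions along a 10 m corridor

def sense(x):
    """Range reading: distance from position x to the nearest beacon."""
    return min(abs(x - b) for b in beacons)

def mcl_step(particles, control, measurement, noise=0.3):
    # 1. Motion update: shift each particle by the control input plus noise.
    moved = [p + control + random.gauss(0, noise) for p in particles]
    # 2. Measurement update: weight by a Gaussian likelihood of the observed range.
    w = [math.exp(-(sense(p) - measurement) ** 2 / (2 * noise ** 2)) for p in moved]
    total = sum(w)
    # Guard against total degeneracy (all weights underflowing to zero).
    w = [x / total for x in w] if total > 0 else [1.0 / len(moved)] * len(moved)
    # 3. Resample particles in proportion to their weights.
    return random.choices(moved, weights=w, k=len(moved))

# True robot starts at 1.0 m and moves 0.5 m per step; particles start anywhere.
true_x = 1.0
particles = [random.uniform(0.0, 10.0) for _ in range(500)]
for _ in range(10):
    true_x += 0.5
    particles = mcl_step(particles, 0.5, sense(true_x))

estimate = statistics.fmean(particles)
```

After a handful of steps the initially uniform particle cloud collapses around the true position; the static-map and fixed-sensor-model assumptions visible even in this sketch are exactly the ones the thesis relaxes.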
99

On Consistent Mapping in Distributed Environments using Mobile Sensors

Saha, Roshmik 2011 August 1900 (has links)
The problem of robotic mapping, also known as simultaneous localization and mapping (SLAM), by a mobile agent in large distributed environments is addressed in this dissertation. This has sometimes been referred to as the holy grail in the robotics community, and is a stepping stone towards making a robot completely autonomous. A hybrid solution to the SLAM problem is proposed based on the "first localize, then map" principle. It is provably consistent and has great potential for real-time application. It provides significant improvements over state-of-the-art Bayesian approaches by reducing the computational complexity of the SLAM problem without sacrificing consistency. The localization is achieved using a feature-based extended Kalman filter (EKF) which utilizes a sparse set of reliable features. The common issues of data association, loop closure, and computational cost of EKF-based methods are kept tractable owing to the sparsity of the feature set. A novel frequentist mapping technique is proposed for estimating the dense part of the environment using the sensor observations. Given the pose estimate of the robot, this technique can consistently map the surrounding environment. The technique has linear time complexity in the number of map components, and for the case of bounded sensor noise, it is shown that the frequentist mapping technique has constant time complexity, which makes it capable of estimating large distributed environments in real time. The frequentist mapping technique is a stochastic approximation algorithm and is shown to converge to the true map probabilities almost surely. The Hybrid SLAM software is developed in the C language and is capable of handling real experimental data as well as simulations. The Hybrid SLAM technique is shown to perform well in simulations, in experiments with an iRobot Create, and on standard datasets from the Robotics Data Set Repository, known as Radish.
It is demonstrated that the Hybrid SLAM technique can successfully map large complex data sets in an order of magnitude less time than the time taken by the robot to acquire the data. It has low system requirements and has the potential to run on-board a robot to estimate large distributed environments in real time.
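One common counting scheme in the spirit of frequentist mapping (an illustrative stand-in, not the dissertation's exact estimator) keeps per-cell hit/miss tallies: each sensor ray touches a bounded number of cells, so an update costs constant time regardless of map size, and each cell's estimate is simply its observed occupancy frequency.

```python
from collections import defaultdict

class CountingGrid:
    """Per-cell hit/miss counters: occupancy = hits / (hits + misses).
    Each ray updates only the cells it touches, so the cost per observation
    is bounded no matter how large the overall map grows."""

    def __init__(self):
        self.hits = defaultdict(int)
        self.misses = defaultdict(int)

    def integrate_ray(self, cells_passed, cell_hit):
        for c in cells_passed:        # beam traversed these cells: evidence "free"
            self.misses[c] += 1
        if cell_hit is not None:      # beam ended here: evidence "occupied"
            self.hits[cell_hit] += 1

    def occupancy(self, cell):
        h, m = self.hits[cell], self.misses[cell]
        return h / (h + m) if h + m else 0.5   # 0.5 = never observed

grid = CountingGrid()
# Three simulated range readings down the same corridor; the wall is at cell (5, 0).
for _ in range(3):
    grid.integrate_ray([(x, 0) for x in range(5)], (5, 0))
# One noisy reading that stopped short, at cell (3, 0).
grid.integrate_ray([(x, 0) for x in range(3)], (3, 0))
```

As observations accumulate, each cell's ratio converges to the frequency with which it is observed occupied, which mirrors the almost-sure convergence property claimed for the frequentist technique above.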
100

An application of predictive vegetation mapping to mountain vegetation in Sweden

Green, Janet Alexis 12 April 2006 (has links)
Predictive vegetation mapping was employed to predict the distribution of vegetation communities and physiognomies in the Swedish portion of the Scandinavian mountains. This was done to address three main research questions: (1) what environmental variables are important in structuring vegetation patterns in the study area? (2) how well does a classification tree predict the composition of mountain vegetation in the study area using the chosen environmental variables? and (3) are vegetation patterns better predicted at higher levels of physiognomic aggregation? Using GIS, a spatial dataset was first developed consisting of sampled points across the full geographic range of the study area. The sample contained existing vegetation community data as the dependent variable and various environmental data as the independent variables thought to control or correlate with vegetation distributions. The environmental data were either obtained from existing digital datasets or derived from Digital Elevation Models (DEMs). Utilizing classification tree methodology, three model frameworks were developed in which vegetation was increasingly aggregated into higher levels of physiognomic organization. The models were then pruned, and accuracy statistics were obtained. Results indicated that accuracy improved with increasing aggregation of the dependent variable. The three model frameworks were then applied to the Abisko portion of the study area in northwestern Sweden to produce predictive maps, which were compared to the current vegetation distribution.
Compositional patterns were critically analyzed in order to: (1) assess the ability of the models to correctly classify general vegetation patterns at the three levels of physiognomic classification, (2) address the extent to which three specific ecological relationships thought to control vegetation distribution in this area were manifested by the model, and (3) speculate as to possible sources of error and factors affecting accuracy of the models.
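The classification-tree methodology used above rests on recursively choosing, for some environmental variable, the split that best separates vegetation classes. A minimal sketch of one such split, minimizing weighted Gini impurity on a single invented elevation variable; real classification-tree software additionally handles many variables, recursion, and the pruning step the abstract mentions.

```python
def gini(labels):
    """Gini impurity of a label multiset (0 = pure, higher = more mixed)."""
    n = len(labels)
    if n == 0:
        return 0.0
    return 1.0 - sum((labels.count(c) / n) ** 2 for c in set(labels))

def best_split(values, labels):
    """Find the threshold on one variable minimizing weighted Gini impurity."""
    best = (float("inf"), None)
    for t in sorted(set(values)):
        left = [l for v, l in zip(values, labels) if v <= t]
        right = [l for v, l in zip(values, labels) if v > t]
        if not left or not right:
            continue   # a split must put samples on both sides
        score = (len(left) * gini(left) + len(right) * gini(right)) / len(labels)
        if score < best[0]:
            best = (score, t)
    return best

# Invented sample points: elevation (m) vs. vegetation physiognomy.
elevation = [450, 520, 610, 700, 940, 1010, 1150, 1300]
veg = ["forest", "forest", "forest", "forest", "heath", "heath", "heath", "heath"]
score, threshold = best_split(elevation, veg)
```

Here the tree discovers a clean treeline-like threshold, separating forest from heath with zero residual impurity; on real data the split is noisier, and aggregating classes into broader physiognomic groups (as the study found) makes such splits easier to find.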
