1 |
VISUAL CROP ROW DETECTION WITH AUXILIARY SEGMENTATION TASK FOR MOBILE ROBOT NAVIGATION
Igor Ferreira da Costa, 07 November 2023
Autonomous robots for agricultural tasks have been researched extensively in recent years, as they can greatly improve field efficiency. Navigating an open crop field, however, remains a major challenge. RTK-GNSS is an excellent tool for tracking the robot's position, but it requires precise mapping and planning while also being expensive and dependent on signal quality. As such, onboard systems that can sense the field directly to guide the robot are a good alternative. Those systems detect the rows with image processing techniques and estimate the position by applying algorithms to the obtained mask, such as the Hough transform or linear regression (this classical pipeline is sketched after the abstract). In this work, a direct approach is presented: a neural network model is trained to obtain the position of the crop lines directly from an RGB image. While the camera in these kinds of systems usually looks down at the field, a camera near the ground is proposed here to take advantage of the tunnels or walls of plants formed between the rows. A simulation environment for evaluating both the model's performance and the camera placement was developed and made available on GitHub. Four datasets for training the models are also proposed: two for the simulations and two for the real-world tests. The simulation results are reported across different resolutions and stages of plant growth, indicating the system's capabilities and limitations, and some of the best configurations are then verified in two types of agricultural environments.
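As a point of reference for the baseline pipeline the abstract describes (vegetation mask, then line fitting), a minimal sketch in Python/OpenCV might look like the following. The excess-green index, thresholds, and Hough parameters are illustrative assumptions, not values from the thesis.

```python
import cv2
import numpy as np

# Sketch of the classical crop-row baseline: segment the vegetation,
# then fit candidate row lines to the binary mask with the Hough transform.
# All parameters below are illustrative, not taken from the thesis.

def detect_rows(bgr: np.ndarray):
    # Crude vegetation mask: excess-green index (2G - R - B), then Otsu threshold.
    b, g, r = cv2.split(bgr.astype(np.float32))
    exg = 2 * g - r - b
    exg = cv2.normalize(exg, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    _, mask = cv2.threshold(exg, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

    # Probabilistic Hough transform on the mask edges gives candidate row lines.
    edges = cv2.Canny(mask, 50, 150)
    lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=80,
                            minLineLength=60, maxLineGap=20)
    return mask, lines  # each line is (x1, y1, x2, y2)
```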
2 |
An Acoustic Indoor Localization System for Unmanned Robots with Temperature Compensation and Co-channel Interference Tolerance
Tsay, Lok Wai Jacky, 26 September 2022
Kyoto University / Doctoral program / Doctor of Agricultural Science / 甲第24246号 / 農博第2525号 / 新制||農||1094 (Main Library) / 学位論文||R4||N5417 (Faculty of Agriculture Library) / Graduate School of Agriculture, Division of Environmental Science and Technology, Kyoto University / Examiners: Prof. Naoshi Kondo (chair), Prof. Michihisa Iida, Assoc. Prof. Yuichi Ogawa / Qualified under Article 4, Paragraph 1 of the Degree Regulations / Doctor of Agricultural Science / Kyoto University / DFAM
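The title pairs acoustic ranging with temperature compensation; the dependence of the speed of sound on air temperature is the standard motivation for such compensation. A minimal sketch of that idea, assuming a one-way time-of-flight setup (all names are hypothetical, not from the thesis):

```python
# Minimal sketch: temperature-compensated time-of-flight ranging.
# Assumes a beacon emits a chirp and the robot timestamps its arrival;
# names are illustrative, not from the thesis.

def speed_of_sound(temp_c: float) -> float:
    """Approximate speed of sound in air (m/s) at temp_c degrees Celsius."""
    return 331.3 + 0.606 * temp_c

def range_from_tof(tof_s: float, temp_c: float) -> float:
    """Distance (m) from one-way time of flight, compensated for temperature."""
    return speed_of_sound(temp_c) * tof_s

# A 10 ms time of flight reads ~3.43 m at 20 C but ~3.49 m at 30 C, so
# ignoring temperature would bias the position estimate by several cm.
print(range_from_tof(0.010, 20.0), range_from_tof(0.010, 30.0))
```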
3 |
A HUB-CI MODEL FOR NETWORKED TELEROBOTICS IN COLLABORATIVE MONITORING OF AGRICULTURAL GREENHOUSES
Ashwin Sasidharan Nair (6589922), 15 May 2019
Networked telerobots are operated by humans through remote interactions and have found applications in unstructured environments such as outer space, underwater, telesurgery, and manufacturing. In precision agricultural robotics, monitoring, recognizing, and detecting targets is a complex task requiring expertise, and is hence performed more efficiently by collaborative human-robot systems. A HUB is an online portal, a platform to create and share scientific and advanced computing tools. HUB-CI is a similar tool developed by the PRISM Center at Purdue University to enable cyber-augmented collaborative interactions over cyber-supported complex systems. Unlike previous HUBs, HUB-CI enables both physical and virtual collaboration between several groups of human users and relevant cyber-physical agents. This research, sponsored in part by the Binational Agricultural Research and Development Fund (BARD), implements the HUB-CI model to improve the Collaborative Intelligence (CI) of an agricultural telerobotic system for early detection of anomalies in pepper plants grown in greenhouses. Specific CI tools developed for this purpose include: (1) spectral image segmentation for detecting and mapping anomalies in growing pepper plants; (2) workflow/task administration protocols for managing and coordinating the interactions between the software, hardware, and human agents engaged in monitoring and detection, so that they reliably lead to precise, responsive mitigation. These CI tools aim to minimize interaction conflicts and errors that may impede detection effectiveness and thereby reduce crop quality. Simulated experiments show that planned and optimized collaborative interactions with HUB-CI (as opposed to ad hoc interactions) yield significantly fewer errors and better detection, improving system efficiency by 210% to 255%. The anomaly detection method was tested on the available spectral image data in terms of the number of anomalous pixels for healthy plants and plants under stress, with ANOVA tests showing statistically significant differences between the plant-health classes (P-value = 0; the test procedure is illustrated after this abstract). The approach thus improves system productivity by leveraging collaboration- and learning-based tools for precise monitoring of the healthy growth of pepper plants in greenhouses.
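The statistical comparison described above (anomalous-pixel counts compared across plant-health classes with one-way ANOVA) can be sketched as follows; the counts are invented for illustration, and only the procedure mirrors the abstract.

```python
from scipy import stats

# One-way ANOVA on anomalous-pixel counts per plant-health class.
# The numbers below are made up for illustration only.
healthy  = [3, 5, 2, 4, 6]
stressed = [40, 55, 38, 61, 47]
diseased = [120, 98, 143, 110, 131]

f_stat, p_value = stats.f_oneway(healthy, stressed, diseased)
print(f"F = {f_stat:.1f}, p = {p_value:.2e}")  # tiny p-value -> classes differ
```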
4 |
Human-in-the-loop of Cyber Physical Agricultural Robotic Systems
Maitreya Sreeram (9706730), 15 December 2020
The onset of Industry 4.0 has provided considerable benefits to Intelligent Cyber-Physical Systems (ICPS), with technologies such as the Internet of Things, wireless sensing, cognitive computing, and artificial intelligence improving automation and control. However, with increasing automation, the "human" element in industrial systems is often overlooked for the sake of standardization. While automation aims to redirect the workload of humans to standardized and programmable entities, humans possess qualities such as cognitive awareness, perception, and intuition which cannot be automated (or programmatically replicated) but can provide automated systems with much-needed robustness and sustainability, especially in unstructured and dynamic environments. Incorporating tangible human skills and knowledge within industrial environments is the essential function of "Human-in-the-loop" (HITL) systems, a term for systems powerfully augmented by different qualities of human agents. The primary challenge, however, lies in the realistic modelling and application of these qualities: an accurate human model must be developed, integrated, and tested within different cyber-physical workflows to (1) validate the assumed advantages and investments, and (2) ensure optimized collaboration between entities. Agricultural Robotic Systems (ARS) are an example of such cyber-physical systems (CPS) which, in order to reduce reliance on traditional human-intensive approaches, leverage sensor networks, autonomous robotics, and vision systems for the early detection of diseases in greenhouse plants. Complete elimination of humans from such environments can prove sub-optimal, given that greenhouses present a host of dynamic conditions and interactions which cannot be explicitly defined or managed automatically. Supported by efficient algorithms for sampling, routing, and search, HITL augmentation of ARS can provide improved detection capabilities, system performance, and stability, while also reducing the workload of humans compared to traditional methods. This research thus studies the modelling and integration of humans into the loop of ARS, using simulation techniques and employing intelligent protocols for optimized interactions. Human qualities are modelled in human "classes" within an event-based, discrete-time simulation developed in Python (a minimal sketch follows this abstract). A logic controller based on collaborative intelligence (HUB-CI) efficiently dictates the workflow logic, owing to the multi-agent and multi-algorithm nature of the system. Two integration hierarchies are simulated to study different types of HITL integration: sequential and shared integration. System performance metrics such as costs, number of tasks, and classification accuracy are measured and compared for different collaboration protocols within each hierarchy, to verify the impact of the chosen sampling and search algorithms. The experiments performed show statistically significant advantages of the HUB-CI-based protocol over traditional protocols in terms of collaborative task performance and disease detectability, justifying the added investment due to the inclusion of HITL. The results also discuss the competitive factors between the two integrations, laying out their relative advantages and disadvantages and the scope for further research. Improving the human modelling and expanding the range of human activities within the loop can help improve the practicality and accuracy of the simulation in replicating an HITL-ARS.
Finally, the research also discusses the development of user-interface software based on ARS methodologies to test the system in the real world.
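A minimal sketch of an event-based HITL simulation loop of the kind described above, assuming a heap-ordered event queue and a Human class that reviews uncertain detections; all names, probabilities, and times are illustrative, not the thesis code.

```python
import heapq
import random

# Event-based HITL sketch: robot agents sample plants; a Human agent reviews
# low-confidence detections. Everything here is illustrative.

class Human:
    def review(self, confidence: float) -> bool:
        # Stand-in for human perception: accepts borderline detections
        # with better-than-machine accuracy.
        return random.random() < 0.9

events = [(0.0, "robot_sample", None)]  # (time, event type, payload)
human, t_end, detections = Human(), 100.0, 0

while events:
    t, kind, payload = heapq.heappop(events)
    if t > t_end:
        break
    if kind == "robot_sample":
        conf = random.random()  # mock classifier confidence
        if conf > 0.8:
            detections += 1                              # machine is sure
        else:
            heapq.heappush(events, (t + 2.0, "human_review", conf))
        heapq.heappush(events, (t + 1.0, "robot_sample", None))
    elif kind == "human_review" and human.review(payload):
        detections += 1                                  # HITL catch

print("confirmed detections:", detections)
```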
5 |
MODELING AND KINEMATIC CONTROL OF A MOBILE ROBOT FOR AUTONOMOUS NAVIGATION IN AGRICULTURAL FIELDS
Adalberto Igor de Souza Oliveira, 25 February 2021
In recent years, mobile robots have emerged as an alternative solution for increasing the levels of automation and mechanization in agricultural fields. In this context, the key idea of precision agriculture is to optimize the use of production inputs, reduce crop losses and water waste, and increase crop production in ever smaller areas, in an efficient and sustainable manner. Agricultural robots, or AgBots, may be autonomous or remotely controlled, and are endowed with different types of locomotion apparatus, actuation and sensory systems, as well as specialized tools that enable them to carry out a number of agricultural tasks such as seeding, pruning, harvesting, phenotyping, monitoring, and data collection. In this work, we study two types of wheeled mobile robots (differential-drive and car-like) and their application to autonomous navigation in agricultural fields. The modeling and control design is based on classical and advanced techniques, using robust control approaches such as first-order Sliding Mode Control and the second-order Super Twisting Algorithm (sketched after this abstract) to deal with parametric uncertainties and external disturbances commonly found in agricultural fields. Verification and validation are carried out by means of numerical simulations in MATLAB and 3D computer simulations in Gazebo. Preliminary experimental tests are included to illustrate the performance and feasibility of the proposed modeling and control methodologies. Concluding remarks and perspectives summarize the strengths and weaknesses of the proposed solution and suggest directions for future improvements.
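As a rough illustration of the second-order technique named above, a super-twisting controller for a scalar sliding variable (for example, a heading error) might look like the following; the gains and the toy plant are placeholders, not the thesis design.

```python
import numpy as np

# Super-twisting algorithm (STA) sketch for a scalar sliding variable s,
# e.g. the heading error of a differential-drive robot. Gains k1, k2 and
# the toy plant are placeholders, not the tuned design from the thesis.

def super_twisting(s: float, z: float, k1: float, k2: float, dt: float):
    """One STA step: u = -k1*|s|^(1/2)*sign(s) + z, z_dot = -k2*sign(s)."""
    u = -k1 * np.sqrt(abs(s)) * np.sign(s) + z
    z += -k2 * np.sign(s) * dt
    return u, z

# Toy closed loop: s_dot = u + d(t), with a bounded matched disturbance d.
s, z, dt = 1.0, 0.0, 0.001
for i in range(5000):
    d = 0.5 * np.sin(0.01 * i)                  # unknown disturbance
    u, z = super_twisting(s, z, k1=1.5, k2=1.1, dt=dt)
    s += (u + d) * dt
print(f"|s| after 5 s: {abs(s):.4f}")           # driven near zero despite d
```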
6 |
A ROBUST VISUAL SERVOING APPROACH FOR ROBOTIC FRUIT HARVESTING
Juan David Gamba Camacho, 05 February 2019
In this work, we present different eye-in-hand visual servoing control schemes for robotic harvesting of soft fruits in the presence of parametric uncertainties in the system models. The first scheme combines position-based visual servoing (PBVS) and image-based visual servoing (IBVS) to perform, respectively, an approach phase toward the fruit and then a fine adjustment of the end-effector for harvesting. The second scheme uses a hybrid visual servoing (HVS) approach to fulfill the complete harvesting task, with a suitable control law that combines error vectors defined in both the image and operational spaces. The detection phase uses an algorithm based on the OHTA color space and Otsu's thresholding method (sketched after this abstract) for fast recognition of mature fruits in complex scenarios. In addition, a more accurate detection method employs a pre-trained deep encoder-decoder network, based on a minimized SegNet version, for fast and cheap inference during task execution. Object localization is accomplished by an image triangulation technique that combines the speeded-up robust features (SURF) and random sample consensus (RANSAC) algorithms, or Oriented FAST and Rotated BRIEF (ORB) with the brute-force matcher (BF-Matcher), to extract fruit image features and match them to the corresponding feature points in the other view of the stereo camera. However, since these algorithms are computationally expensive for the task requirements, a faster estimation method uses the fruit centroid and a homogeneous transformation to find the matching points. Finally, a vision-based sliding mode control (SMC) scheme and a switching monitoring function are employed to cope with uncertainties in the calibration parameters of the camera-robot system. In this context, it is possible to guarantee the asymptotic stability and convergence of the image feature error even if the misalignment angle about the z-axis between the camera and end-effector frames is uncertain. 3D computer simulations and preliminary experimental results, obtained with a Mitsubishi RV-2AJ robot arm carrying out a simple strawberry-picking task, are included to illustrate the performance and effectiveness of the proposed control schemes.
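The fast detection stage described above (Ohta color features plus Otsu thresholding) can be sketched as follows. Ohta's features are I1 = (R+G+B)/3, I2 = (R-B)/2, and I3 = (2G-R-B)/4; using I2 to separate red fruit from green foliage is an assumption suited to strawberries, not a detail confirmed by the abstract.

```python
import cv2
import numpy as np

# Sketch of the fast detector: project the image onto Ohta's I2 = (R - B) / 2
# feature, then binarize with Otsu's threshold. The choice of I2 is an
# illustrative assumption for red fruit against green foliage.

def detect_ripe_fruit(bgr: np.ndarray) -> np.ndarray:
    b, g, r = cv2.split(bgr.astype(np.float32))
    i2 = (r - b) / 2.0                           # Ohta color feature I2
    i2 = cv2.normalize(i2, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    _, mask = cv2.threshold(i2, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return mask

# The fruit centroid used by the faster matching step is then just the
# first-order moments of the binary mask:
def centroid(mask: np.ndarray):
    m = cv2.moments(mask, binaryImage=True)
    return (m["m10"] / m["m00"], m["m01"] / m["m00"]) if m["m00"] else None
```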