251 |
Risk-Aware Planning by Extracting Uncertainty from Deep Learning-Based Perception
Toubeh, Maymoonah I. 07 December 2018 (has links)
The integration of deep learning models and classical techniques in robotics is constantly creating solutions to problems once thought out of reach. The issues arising in most working models involve the gap between experimentation and reality, creating a need for strategies that assess the risk involved with different models when they are applied in real-world and safety-critical situations. This work proposes the use of Bayesian approximations of uncertainty from deep learning in a robot planner, showing that this produces more cautious actions in safety-critical scenarios. The case study investigated is motivated by a setup where an aerial robot acts as a "scout" for a ground robot when the area below is unknown or dangerous, with applications in space exploration, military operations, and search-and-rescue. Images taken from the aerial view are used to provide a less obstructed map to guide the navigation of the robot on the ground. Experiments are conducted using deep learning semantic image segmentation, followed by a path planner based on the resulting cost map, to provide an empirical analysis of the proposed method. The method is analyzed to assess the impact of variations in the uncertainty extraction, as well as of the absence of an uncertainty metric, on the overall system, using a defined factor that measures surprise to the planner. The analysis is performed on multiple datasets, showing a similar trend of lower surprise when uncertainty information is incorporated in the planning, provided that threshold values of the hyperparameters in the uncertainty extraction have been met. / Master of Science / Deep learning (DL) is the phrase used to refer to the use of large hierarchical structures, often called neural networks, to approximate semantic information from data input of various forms. DL has shown superior performance at many tasks, such as several forms of image understanding, often referred to as computer vision problems. Deep learning techniques are trained using large amounts of data to map input data to output interpretations. The method should then perform correct input-output mappings on new data, different from the data it was trained on.
Robots often carry various sensors from which it is possible to make interpretations about the environment. Inputs from a sensor can be high dimensional, such as pixels given by a camera, and processing these inputs can be quite tedious and inefficient for a human interpreter. Deep learning has recently been adopted by roboticists as a means of automatically interpreting and representing sensor inputs, like images. The issue that arises with the traditional use of deep learning is twofold: it forces an interpretation of the inputs even when an interpretation is not applicable, and it does not provide a measure of certainty with its outputs. Many techniques have been developed to address this issue. These techniques aim to produce a measure of uncertainty associated with DL outputs, such that even when an incorrect or inapplicable output is produced, it is accompanied by a high level of uncertainty.
To explore the efficacy and applicability of these uncertainty extraction techniques, this thesis looks at their use as part of a robot planning system. Specifically, the input to the robot planner is an overhead image taken by an unmanned aerial vehicle (UAV), and the output is a path between set start and goal positions to be taken by an unmanned ground vehicle (UGV) below. The image is passed through a deep learning portion of the system that performs what is called semantic segmentation, mapping each pixel of the image to a meaningful class. Based on the segmentation, each pixel is given a cost proportional to the perceived level of safety associated with its class. A cost map is thus formed over the entire image, from which traditional robotics techniques are used to plan a path from start to goal.
A comparison is performed between the risk-neutral case, which uses the conventional DL method, and the risk-aware case, which uses the uncertainty information accompanying the modified DL technique. The overall effects on the robot system are evaluated by observing a metric called the surprise factor, where a high surprise factor signifies a poor prediction of the actual cost associated with a path. The risk-neutral case is shown to have a higher surprise factor than the proposed risk-aware setup, both on average and in safety-critical case studies.
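For illustration, a minimal sketch of the kind of pipeline described above, assuming a Monte Carlo dropout style of Bayesian approximation: per-pixel uncertainty from repeated stochastic forward passes inflates the planner's cost map, and a surprise factor compares predicted with actual path cost. The function names (including `stochastic_forward`) and the weighting values are illustrative assumptions, not the thesis implementation.

```python
# Sketch of a risk-aware costing step (hypothetical names and values).
# A dropout-enabled segmentation net is sampled T times per image; per-pixel variance
# across samples serves as the uncertainty estimate that inflates the planner's cost map.
import numpy as np

def mc_dropout_segmentation(model, image, T=20):
    """Run T stochastic forward passes (dropout kept active) and return
    the mean class probabilities and their per-pixel predictive variance."""
    samples = np.stack([model.stochastic_forward(image) for _ in range(T)])  # (T, H, W, C)
    return samples.mean(axis=0), samples.var(axis=0).sum(axis=-1)

def risk_aware_cost_map(mean_probs, uncertainty, class_costs, risk_weight=5.0):
    """Base cost from the most likely class, inflated where the model is uncertain.
    class_costs is an array indexed by class id."""
    labels = mean_probs.argmax(axis=-1)            # (H, W) semantic labels
    base_cost = class_costs[labels]                # cost per perceived class
    return base_cost + risk_weight * uncertainty   # cautious near uncertain pixels

def surprise_factor(planned_cost, true_cost):
    """Gap between the cost the planner predicted and the true cost of the path."""
    return abs(true_cost - planned_cost) / max(true_cost, 1e-9)
```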
|
252 |
Reinforced concrete two-span continuous deep beams
Ashour, Ashraf, Morley, C.T., Subedi, N.K. January 2002 (has links)
|
253 |
Deep learning based diatom-inspired metamaterial design
Shih, Ting-An 16 January 2023 (has links)
Diatom algae, abundantly found in the ocean, have hierarchical micro- and nanopores that have inspired many metamaterial designs, including dielectric metasurfaces. The conventional approach to the metamaterial design process is to generate the corresponding optical spectrum using physics-based simulation software. Although this approach provides high accuracy, it is time-consuming and subject to constraints. By setting the design parameters and the structure of the material, the optical response can be obtained easily; however, this approach cannot handle the inverse problem as easily as the forward problem. In this study, a deep learning model capable of solving both the forward and the inverse problem of a diatom-inspired metamaterial design was developed and further verified experimentally. This method serves as an alternative to the traditional metamaterial design process that greatly saves time and also offers functionality that simulation does not provide. To investigate the feasibility of this method, different input training datasets were examined, and several strategies were taken to improve the model performance. Despite success in some cases, further effort is still needed to employ the technique more broadly.
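As an illustration of one common way to couple forward and inverse models in such a design loop, the sketch below assumes a tandem-network setup in PyTorch: a forward network maps design parameters to the optical spectrum, and an inverse network is trained by passing its proposed design through the frozen forward network. The layer sizes and dimensions are assumptions and do not come from the thesis.

```python
# Illustrative tandem-network setup for forward (design -> spectrum) and inverse
# (spectrum -> design) modeling; dimensions and architecture are assumptions.
import torch
import torch.nn as nn

N_DESIGN, N_SPECTRUM = 5, 200    # e.g. pore sizes/periods vs. sampled wavelengths (hypothetical)

forward_net = nn.Sequential(     # predicts the optical spectrum from design parameters
    nn.Linear(N_DESIGN, 256), nn.ReLU(),
    nn.Linear(256, 256), nn.ReLU(),
    nn.Linear(256, N_SPECTRUM),
)

inverse_net = nn.Sequential(     # proposes design parameters for a target spectrum
    nn.Linear(N_SPECTRUM, 256), nn.ReLU(),
    nn.Linear(256, 256), nn.ReLU(),
    nn.Linear(256, N_DESIGN),
)

# Freeze the forward model during inverse training (assumed pre-trained elsewhere).
for p in forward_net.parameters():
    p.requires_grad = False

def tandem_loss(target_spectrum):
    """Train the inverse net by re-simulating its proposed design through the frozen
    forward net, which sidesteps the one-to-many ambiguity of inverse design."""
    design = inverse_net(target_spectrum)
    predicted = forward_net(design)
    return nn.functional.mse_loss(predicted, target_spectrum)
```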
|
254 |
Neural Network Emulation for Computer Model with High Dimensional Outputs using Feature Engineering and Data Augmentation
Alamari, Mohammed Barakat January 2022 (has links)
No description available.
|
255 |
Convolutional neural networks using cardiac magnetic resonance for early diagnosis and risk stratification of cardiac amyloidosis
Cockrum, Joshua W. January 2022 (has links)
No description available.
|
256 |
Reinforcement Learning for Hydrobatic AUVs
Woźniak, Grzegorz January 2022 (has links)
This master thesis focuses on developing a Reinforcement Learning (RL) controller to successfully perform hydrobatic maneuvers on an Autonomous Underwater Vehicle (AUV). This work also aims to analyze the robustness of the RL controller, as well as provide a comparison between RL algorithms and Proportional Integral Derivative (PID) control. Training of the algorithms is initially conducted in a Numpy simulation in Python. We show how to model the Equations of Motion (EOM) of the AUV and how to use this model to train the RL controllers. We use the stable-baselines3 RL framework and create a training environment with OpenAI Gym. The Twin Delayed Deep Deterministic Policy Gradient (TD3) algorithm offers good performance in the simulation. The following maneuvers are studied: trim control, waypoint following, and an inverted pendulum. We test the maneuvers both in the Numpy simulation and in the Stonefish simulator. We also test the robustness of the RL trim controller by simulating noise in the state feedback. Lastly, we run the RL trim controller on real AUV hardware called SAM. We show that the RL algorithm trained in the Numpy simulator can achieve performance similar to the PID controller in the Stonefish simulator. We generate a policy that can perform the trim control and the inverted pendulum maneuver in the Numpy simulation. We show that we can generate a robust policy that executes other types of maneuvers by providing a parameterized cost function to the RL algorithm. We discuss the results of every maneuver we perform with the SAM AUV, along with the advantages and disadvantages of this control method as applied to underwater robotics. We conclude that RL can be used to create policies that perform hydrobatic maneuvers. This data-driven approach can be applied in the future to more complex problems in underwater robotics.
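A minimal sketch of the training setup described in the abstract, using TD3 from stable-baselines3 on a Gym environment; the environment ID `SamAuvTrim-v0` and the hyperparameters are hypothetical placeholders for a custom environment wrapping the AUV equations of motion.

```python
# Sketch: train a TD3 trim-control policy with stable-baselines3 on a custom Gym env.
# The registered environment name and hyperparameters are illustrative assumptions.
import gym
import numpy as np
from stable_baselines3 import TD3
from stable_baselines3.common.noise import NormalActionNoise

env = gym.make("SamAuvTrim-v0")        # hypothetical env wrapping the Numpy EOM model

n_actions = env.action_space.shape[0]
action_noise = NormalActionNoise(mean=np.zeros(n_actions), sigma=0.1 * np.ones(n_actions))

model = TD3("MlpPolicy", env, action_noise=action_noise, learning_rate=1e-3, verbose=1)
model.learn(total_timesteps=200_000)   # train the trim-control policy in simulation
model.save("td3_sam_trim")
```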
|
257 |
Identifying streamflow changes in western North America from 1979 to 2021 using Deep Learning approaches
Tang, Weigang 11 1900 (has links)
Streamflow in Western North America (WNA) has been experiencing pronounced changes in terms of volume and timing over the past century, primarily driven by natural climate variability and human-induced climate changes. This thesis builds on previous work by revealing the most recent streamflow changes in WNA using a comprehensive suite of classical hydrometric methods along with novel Deep Learning (DL) based approaches for change detection and classification. More than 500 natural streams were included in the analysis across western Canada and the United States. Trend analyses based on the Mann-Kendall test were conducted on a wide selection of classic hydrometric indicators to represent varying aspects of streamflow over the 43 years from 1979 to 2021. A general geographical divide at approximately 46°N latitude indicates that total streamflow is increasing to the north while declining to the south. Declining late summer flows (July–September) were also widespread across the WNA domain, coinciding with an overall reduction in precipitation. Some changing patterns are region-specific, including: 1) increased winter low flows at high latitudes; 2) earlier spring freshet in the Rocky Mountains; 3) increased autumn flows in the coastal Pacific Northwest; and 4) dramatic drying in the southwestern United States. In addition to classic hydrometrics, trend analysis was performed on Latent Features (LFs), which were extracted by a Variational AutoEncoder (VAE) from raw streamflow data and are considered "machine-learned hydrometrics". Some LFs with direct hydrological implications were closely associated with the classical hydrometric indicators, such as flow quantity, seasonal distribution, timing and magnitude of freshet, and snow-to-rain transition. The changing patterns of streamflow revealed by LFs agree directly with the hydrometric trends. By reconstructing hydrographs from select LFs, the VAE also provides a mechanism to project changes in streamflow patterns in the future. Furthermore, a parametric t-SNE method based on DL technology was developed to visualize similarity among a large number of hydrographs on a 2-D map. This novel method allows fast grouping of hydrologically similar rivers based on their flow regime type and provides new opportunities for streamflow classification and regionalization. / Thesis / Doctor of Philosophy (PhD)
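For reference, a minimal Mann-Kendall trend test of the kind applied to each hydrometric indicator (here without a tie correction); the interpretation comment is illustrative, not a result from the thesis.

```python
# A minimal Mann-Kendall trend test for an annual hydrometric series
# (e.g. July-September mean flow, 1979-2021); no tie correction applied.
import numpy as np
from scipy.stats import norm

def mann_kendall(series):
    """Return the MK S statistic, Z score, and two-sided p-value for a 1-D series."""
    x = np.asarray(series, dtype=float)
    n = len(x)
    s = sum(np.sign(x[j] - x[i]) for i in range(n - 1) for j in range(i + 1, n))
    var_s = n * (n - 1) * (2 * n + 5) / 18.0       # variance assuming no ties
    if s > 0:
        z = (s - 1) / np.sqrt(var_s)
    elif s < 0:
        z = (s + 1) / np.sqrt(var_s)
    else:
        z = 0.0
    p = 2 * (1 - norm.cdf(abs(z)))                 # two-sided p-value
    return s, z, p

# e.g. a declining late-summer flow series would yield z < 0 with a small p-value
```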
|
258 |
Yield Prediction Using Spatial and Temporal Deep Learning Algorithms and Data Fusion
Bisht, Bhavesh 24 November 2023
The world's population is expected to grow to 9.6 billion by 2050. This exponential growth imposes a significant challenge on food security, making the development of efficient crop production a growing concern. Traditional methods of analyzing soil and crop yield rely on manual field surveys and the use of expensive instruments. This process is not only time-consuming but also requires a team of specialists, making this method of prediction expensive. Prediction of yield is an integral part of smart farming, as it enables farmers to make timely, informed decisions and maximize productivity while minimizing waste. Traditional statistical approaches fall short in optimizing yield prediction due to the multitude of diverse variables that influence crop production. Additionally, the interactions between these variables are non-linear, which these methods fail to capture. Recent approaches in machine learning and data-driven models are better suited for handling the complexity and variability of crop yield prediction.
Maize, also known as corn, is a staple crop in many countries and is used in a variety of food products, including bread, cereal, and animal feed. In 2021-2022, total corn production was around 1.2 billion tonnes, surpassing that of wheat or rice and making it an essential element of food production. With the advent of remote sensing, unmanned aerial vehicles (UAVs) are widely used to capture high-quality field images, making it possible to capture minute details for better analysis of the crops. By combining spatial features, such as topography and soil type, with crop growth information, it is possible to develop a robust and accurate system for predicting crop yield. Convolutional Neural Networks (CNNs) are a type of deep neural network that has shown remarkable success in computer vision tasks, achieving state-of-the-art performance. Their ability to automatically extract features and patterns from datasets makes them highly effective in analyzing complex and high-dimensional data, such as drone imagery. In this research, we aim to build an effective crop yield predictor using data fusion and deep learning. We propose several deep CNN architectures that can accurately predict corn yield before the end of the harvesting season, which can aid farmers by providing them with valuable information about potential harvest outcomes and enabling them to make informed decisions regarding resource allocation. UAVs equipped with RGB (Red Green Blue) and multi-spectral cameras were scheduled to capture high-resolution images over the entire 2021 growth period of three fields located in Ottawa, Ontario, where primarily corn was grown. Ground yield data was acquired at the time of harvesting using a yield-monitoring device mounted on the harvester. Several data processing techniques were employed to remove erroneous measurements, the processed data was fed to different CNN architectures, and several analyses were done on the models to highlight the techniques and methods that lead to the best performance. The final best-performing model was a 3-dimensional CNN that can predict yield using images from the early (June) and mid (July) growing stages with a Mean Absolute Percentage Error of 15.18% and a Root Mean Squared Error of 17.63 bushels per acre. The model trained on data from Field 1 demonstrated an average correlation coefficient of 0.57 between the true and predicted yield values from Field 2 and Field 3. This research provides a direction for developing an end-to-end yield prediction model. Additionally, by leveraging the results from the experiments presented in this research, image acquisition and computation costs can be brought down.
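The sketch below illustrates the general shape of a 3-D CNN yield regressor of the kind described: a short temporal stack of multi-spectral patches is treated as a volume (time as the depth axis) and reduced to a single yield estimate. The number of bands, dates, patch size, and layer widths are assumptions, not the architecture used in the thesis.

```python
# Illustrative 3-D CNN regressor: consumes a temporal stack of multi-spectral patches
# (time treated as depth) and outputs one yield estimate per patch. Shapes are assumptions.
import torch
import torch.nn as nn

class Yield3DCNN(nn.Module):
    def __init__(self, in_channels=5):                  # e.g. 5 spectral bands (hypothetical)
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(in_channels, 16, kernel_size=(2, 3, 3), padding=(0, 1, 1)), nn.ReLU(),
            nn.Conv3d(16, 32, kernel_size=(1, 3, 3), padding=(0, 1, 1)), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),
        )
        self.regressor = nn.Linear(32, 1)               # yield in bushels per acre

    def forward(self, x):                               # x: (batch, bands, time, H, W)
        return self.regressor(self.features(x).flatten(1))

patches = torch.randn(8, 5, 2, 64, 64)                  # 8 patches, 5 bands, 2 dates (June, July)
print(Yield3DCNN()(patches).shape)                      # torch.Size([8, 1])
```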
|
259 |
Algebraic Learning: Towards Interpretable Information Modeling
Yang, Tong January 2021
Thesis advisor: Jan Engelbrecht / Along with the proliferation of digital data collected using sensor technologies and a boost of computing power, Deep Learning (DL) based approaches have drawn enormous attention in the past decade due to their impressive performance in extracting complex relations from raw data and representing valuable information. At the same time, though, rooted in its notorious black-box nature, the appreciation of DL has been highly debated due to the lack of interpretability. On the one hand, DL only utilizes statistical features contained in raw data while ignoring human knowledge of the underlying system, which results in both data inefficiency and trust issues; on the other hand, a trained DL model does not provide researchers with any extra insight about the underlying system beyond its output, which, however, is the essence of most fields of science, e.g. physics and economics. The interpretability issue, in fact, has been naturally addressed in physics research. Conventional physics theories develop models of matter to describe experimentally observed phenomena. Tasks in DL, instead, can be considered as developing models of information to match collected datasets. Motivated by techniques and perspectives in conventional physics, this thesis addresses the issue of interpretability in general information modeling. This thesis endeavors to address the two drawbacks of DL approaches mentioned above. Firstly, instead of relying on an intuition-driven construction of model structures, a problem-oriented perspective is applied to incorporate knowledge into modeling practice, where interesting mathematical properties emerge naturally and cast constraints on modeling. Secondly, given a trained model, various methods can be applied to extract further insights about the underlying system; this is achieved either through a simplified function approximation of the complex neural network model, or by analyzing the model itself as an effective representation of the system. These two pathways are termed guided model design (GuiMoD) and secondary measurements, respectively, which, together, present a comprehensive framework to investigate the general field of interpretability in modern Deep Learning practice. Remarkably, during the study of GuiMoD, a novel scheme emerges for the modeling practice in statistical learning: Algebraic Learning (AgLr). Instead of being restricted to the discussion of any specific model structure or dataset, AgLr starts from the idiosyncrasies of a learning task itself and studies the structure of a legitimate model class in general. This novel modeling scheme demonstrates the noteworthy value of abstract algebra for general artificial intelligence, which has been overlooked in recent progress, and could shed further light on interpretable information modeling by offering practical insights from a formal yet useful perspective. / Thesis (PhD) — Boston College, 2021. / Submitted to: Boston College. Graduate School of Arts and Sciences. / Discipline: Physics.
|
260 |
Uncertainty Quantification in Neural Network-Based Classification Models
Amiri, Mohammad Hadi 10 January 2023
Probabilistic behavior in perceiving the environment and taking critical decisions has an inevitable role in human life. A decision is concerned with a choice among the available alternatives and is always subject to unknown elements concerning the future. The lack of complete data; insufficient scientific, behavioral, and industrial development; and, of course, defects in measurement methods all affect the reliability of an action's outcome. Thus, having a proper estimation of this reliability or uncertainty can be very advantageous, particularly when an individual, or more generally a subject, is faced with high risk. Given that there are always uncertainty elements whose values are unknown and that enter a process through multiple sources, it has been a primary challenge to design an efficient and objective representation of confidence. With the aim of addressing this problem, a variety of studies have been conducted to introduce frameworks in the metrology of uncertainty quantification that are comprehensive enough and transferable to different areas. Moreover, it is also challenging to define a proper index that reflects more aspects of the problem and the measurement process.

With significant advances in Artificial Intelligence in the past decade, one of the key elements in easing human life by giving more control to machines is to heed the uncertainty estimation for a prediction. With a focus on measurement aspects, this thesis demonstrates how a different measurement index affects the quality of the evaluated predictive uncertainty of neural networks. Finally, we propose a novel index that shows uncertainty values with the same or higher quality than existing methods, which emphasizes the benefits of having a proper measurement index in managing the risk of the outcome from a classification model.
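As background for the measurement-index discussion, the sketch below shows two standard per-sample uncertainty indices for a softmax classifier and a simple ranking-based quality check; it is a generic illustration, not the novel index proposed in the thesis.

```python
# Two common per-sample uncertainty indices for a softmax classifier, plus a simple
# quality check: does the index rank misclassified samples above correct ones?
import numpy as np

def predictive_entropy(probs):
    """Shannon entropy of the predicted class distribution, one value per sample."""
    return -np.sum(probs * np.log(probs + 1e-12), axis=1)

def max_softmax_uncertainty(probs):
    """1 - max probability: 0 for a fully confident prediction."""
    return 1.0 - probs.max(axis=1)

def ranking_quality(uncertainty, correct):
    """AUROC-style score: probability that a misclassified sample receives higher
    uncertainty than a correctly classified one (0.5 = uninformative, 1.0 = ideal)."""
    err_u, ok_u = uncertainty[~correct], uncertainty[correct]
    if len(err_u) == 0 or len(ok_u) == 0:
        return np.nan
    return (err_u[:, None] > ok_u[None, :]).mean()
```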
|