
An Image Processing and Pattern Analysis Approach for Food Recognition

Pouladzadeh, Parisa 21 January 2013 (has links)
As people across the globe become more interested in watching their weight, eating more healthily, and avoiding obesity, a system that can measure the calories and nutrients in everyday meals can be very useful. Recently, there has been an increase in the use of personal mobile technology such as smartphones and tablets, which users carry with them practically all the time. In this paper, we propose a food calorie and nutrition measurement system that can help patients and dieticians measure and manage daily food intake. Our system is built on food image processing and uses nutritional fact tables. Via a special calibration technique, the system uses the built-in camera of such mobile devices and records a photo of the food before and after eating in order to measure the calories and nutrients consumed. The proposed algorithm uses color, texture, and contour segmentation and extracts important features such as shape, color, size, and texture. Using various combinations of these features and applying a support vector machine as a classifier, good classification was achieved; simulation results show that the algorithm recognizes food categories with an accuracy of 92.2% on average.
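The classification stage described above can be sketched as follows. This is a minimal illustration, not the thesis implementation: a toy color-histogram feature (standing in for the full color/texture/shape/size feature set) is fed to scikit-learn's `SVC` on synthetic stand-in data, and all names and values here are assumptions for illustration only.

```python
# Sketch: hand-crafted image features fed to a support vector machine.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

def color_histogram(image, bins=8):
    """Flattened per-channel color histogram as a simple feature vector."""
    feats = [np.histogram(image[..., c], bins=bins, range=(0, 256))[0]
             for c in range(3)]
    return np.concatenate(feats).astype(float) / image[..., 0].size

# Synthetic stand-in data: 60 random "food images" in 3 categories.
rng = np.random.default_rng(0)
images = rng.integers(0, 256, size=(60, 32, 32, 3))
labels = rng.integers(0, 3, size=60)

X = np.array([color_histogram(img) for img in images])
X_train, X_test, y_train, y_test = train_test_split(X, labels, random_state=0)

clf = SVC(kernel="rbf").fit(X_train, y_train)   # the classifier named above
print("accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```

On real food images the feature vectors would of course come from the segmentation pipeline rather than raw histograms of random pixels.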

Cloud Computing Frameworks for Food Recognition from Images

Peddi, Sri Vijay Bharat January 2015 (has links)
Distributed cloud computing, when integrated with smartphone capabilities, contributes to building efficient multimedia e-health applications for mobile devices. Unfortunately, mobile devices alone do not possess the ability to run complex machine learning algorithms, which require large amounts of graphics processing and computational power. Offloading the computationally intensive part to the cloud therefore reduces the overhead on the mobile device. In this thesis, we introduce two such distributed cloud computing models, which implement machine learning algorithms in the cloud in parallel, thereby achieving higher accuracy. The first model is based on MapReduce SVM: through the use of Hadoop, the system distributes the data and processes it across resizable Amazon EC2 instances. Hadoop uses a distributed processing architecture called MapReduce, in which a task is mapped to a set of servers for processing and the results are then reduced back to a single set. In the second method, we implement cloud virtualization, wherein we are able to run our mobile application in the cloud using an Android x86 image. We describe a cloud-based virtualization mechanism for multimedia-assisted mobile food recognition that allows users to control their virtual smartphone operations through a dedicated client application installed on their smartphone. The application continues to be processed on the virtual mobile image even if the user is disconnected for some reason. Using these two distributed cloud computing models, we were able to achieve higher accuracy and reduced execution times for the machine learning algorithms and calorie measurement methodologies, compared with running them on the mobile device alone.
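The MapReduce pattern described above can be illustrated with a toy sketch: the data is split into shards, each "mapper" (here a plain function call standing in for a Hadoop task on an EC2 instance) processes its shard independently, and a reduce step merges the partial results back into a single set. The shard contents and labels below are made up for illustration; this is not the thesis implementation.

```python
# Toy map/reduce: tally classification results produced on separate shards.
from collections import Counter
from functools import reduce

def map_task(shard):
    """Each mapper emits (label, count) pairs for its shard of predictions."""
    return Counter(shard)

def reduce_task(a, b):
    """Reducers merge partial counts into a single tally."""
    return a + b

# Predictions produced independently on three shards of the data.
shards = [["pizza", "salad", "pizza"], ["salad", "salad"], ["pizza", "pizza"]]
partials = [map_task(s) for s in shards]           # map phase (parallelizable)
totals = reduce(reduce_task, partials, Counter())  # reduce phase
print(totals.most_common(1))   # overall majority label
```

In the real system the map phase runs SVM training or prediction on each shard in parallel across EC2 instances; the pattern of "map to many servers, reduce to one result" is the same.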

A Deep Learning and Auto-Calibration Approach for Food Recognition and Calorie Estimation in Mobile e-Health

Kuhad, Pallavi January 2015 (has links)
High calorie intake has proved harmful worldwide, as it has led to many diseases. However, dieticians maintain that a standard daily calorie intake is essential to keeping the right caloric balance in the human body. In this thesis, we consider the category of tools that use image processing to recognize single and mixed multi-item food objects, namely deep learning and the support vector machine (SVM). We propose a method for the fully automatic and user-friendly calibration of food portion sizes. This calibration is required to estimate the total number of calories in food portions. In this work, to compute the number of calories in a food object, we go beyond the finger-based calorie calibration method used in the past by automatically measuring the distance between the user and the food object. We implement a block resize method that uses the measured distance values along with the recognized food object name to estimate calories. While measuring distance, the system also assists the user in real time to capture an image that enables the quick and accurate calculation of the number of calories in the food object. The experimental results showed that our method, which uses deep learning to analyze food objects, improved recognition by 16.58% over the SVM-based method. Moreover, the block resize method reduced the percentage error of calorie estimation to 3.64%, compared with the 5% achieved by previous methods.
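The distance-based calibration idea above can be sketched with a pinhole camera model: an object's real size scales linearly with its distance from the camera, so once the user-to-food distance is known, a pixel measurement can be converted to physical size and then to calories. The focal length and the per-area calorie density below are illustrative assumptions, not values from the thesis.

```python
# Hedged sketch of distance-based size calibration for calorie estimation.
import math

def real_width_cm(pixel_width, distance_cm, focal_length_px=800.0):
    """Pinhole model: real width = pixel width * distance / focal length."""
    return pixel_width * distance_cm / focal_length_px

def estimate_calories(pixel_width, distance_cm, kcal_per_cm2):
    """Approximate calories from the (assumed circular) visible food area."""
    radius = real_width_cm(pixel_width, distance_cm) / 2.0
    area_cm2 = math.pi * radius ** 2
    return area_cm2 * kcal_per_cm2

# Example: a dish 400 px wide, photographed from 40 cm away,
# with an assumed density of 3 kcal per cm^2 of visible area.
print(round(estimate_calories(400, 40, 3.0), 1))
```

The actual block resize method also uses the recognized food name to look up nutritional values; the point of the sketch is only the distance-to-size conversion that replaces the older finger-based reference.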

Leveraging contextual cues for dynamic scene understanding

Bettadapura, Vinay Kumar 27 May 2016 (has links)
Environments with people are complex, with many activities and events that need to be represented and explained. The goal of scene understanding is to either determine what objects and people are doing in such complex and dynamic environments, or to know the overall happenings, such as the highlights of the scene. The context within which the activities and events unfold provides key insights that cannot be derived by studying the activities and events alone. In this thesis, we show that this rich contextual information can be successfully leveraged, along with the video data, to support dynamic scene understanding. We categorize and study four different types of contextual cues: (1) spatio-temporal context, (2) egocentric context, (3) geographic context, and (4) environmental context, and show that they improve dynamic scene understanding tasks across several different application domains. We start by presenting data-driven techniques to enrich spatio-temporal context by augmenting Bag-of-Words models with temporal, local and global causality information, and show that this improves activity recognition, anomaly detection and scene assessment from videos. Next, we leverage the egocentric context derived from sensor data captured by first-person point-of-view devices to perform field-of-view localization in order to understand the user's focus of attention. We demonstrate single- and multi-user field-of-view localization in both indoor and outdoor environments, with applications in augmented reality, event understanding and the study of social interactions. Next, we look at how geographic context can be leveraged to make challenging "in-the-wild" object recognition tasks more tractable, using the problem of food recognition in restaurants as a case study. Finally, we study the environmental context obtained from dynamic scenes such as sporting events, which take place in responsive environments such as stadiums and gymnasiums, and show that it can be successfully used to address the challenging task of automatically generating basketball highlights. We perform comprehensive user studies on 25 full-length NCAA games and demonstrate the effectiveness of environmental context in producing highlights that are comparable to those produced by ESPN.
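The Bag-of-Words augmentation mentioned above can be illustrated with a toy sketch: a plain word-count histogram is extended with ordered temporal pairs, so the representation also captures which "visual word" tends to follow which. This is loosely in the spirit of the temporal/causality augmentation the thesis describes, not its actual feature set; the token names are made up.

```python
# Toy Bag-of-Words over quantized "visual words", plus temporal-pair features.
from collections import Counter

def bag_of_words(tokens):
    """Plain orderless word counts."""
    return Counter(tokens)

def temporal_pairs(tokens):
    """Ordered co-occurrence features: which word follows which."""
    return Counter(zip(tokens, tokens[1:]))

# A toy sequence of quantized visual words from a video clip.
clip = ["run", "jump", "run", "throw", "run", "jump"]
features = bag_of_words(clip) + Counter(
    {f"{a}->{b}": c for (a, b), c in temporal_pairs(clip).items()})
print(features.most_common(3))
```

The orderless histogram alone cannot distinguish "run then jump" from "jump then run"; the added pair features restore that temporal information.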

Towards a Smart Food Diary : Evaluating semantic segmentation models on a newly annotated dataset: FoodSeg103

Reibel, Yann January 2024 (has links)
Automatic food recognition is becoming a practical approach to diet control, as it can relieve the burden of self diet assessment by offering an easy process that immediately detects the food items in a picture. The step of accurately segmenting the different image regions into the proper food categories is crucial for accurate calorie estimation. In this thesis, we use the PREVENT project as the background for the task of creating a model capable of segmenting food. We carried out the research on a newly annotated dataset, FoodSeg103, which provides a more realistic data foundation for this study. Most papers on FoodSeg103 focus on Vision Transformer models, which are currently popular but come with computational constraints. We instead chose DeepLabV3, a dilation-based semantic segmentation model, with the main objective of training it on the dataset and the additional hope of improving on the state-of-the-art results. We set up an iterative optimization process to maximize the results and attained 48.27% mIoU (also referred to as "mIoU all" in the thesis). We also observed a significant difference in average mIoU between the random search experiments and the Bayesian optimization experiments. This study has not surpassed the state-of-the-art performance but finished about 1% behind it, with BEIT v2 Large remaining in first position at 49.4% mIoU.
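The mIoU figure reported above is the standard semantic segmentation metric: per-class intersection-over-union between the predicted and ground-truth masks, averaged over classes. A minimal sketch on tiny synthetic masks (the 2x2 arrays are illustrative, not FoodSeg103 data):

```python
# Minimal mean-IoU computation for semantic segmentation masks.
import numpy as np

def mean_iou(pred, target, num_classes):
    """Mean IoU over classes present in either mask."""
    ious = []
    for c in range(num_classes):
        p, t = pred == c, target == c
        union = np.logical_or(p, t).sum()
        if union == 0:          # class absent from both masks: skip it
            continue
        ious.append(np.logical_and(p, t).sum() / union)
    return float(np.mean(ious))

# Two tiny 2x2 masks with classes 0 (background) and 1 (food).
pred   = np.array([[0, 1], [1, 1]])
target = np.array([[0, 1], [0, 1]])
print(mean_iou(pred, target, num_classes=2))
```

On FoodSeg103 the same computation runs over 104 classes (103 food categories plus background) and full-resolution masks.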

Image Comparing and Recognition : Food Classification

Häggqvist, Victor, Lundberg, Peter January 2015 (has links)
Image recognition and comparison is a topic that has long been in focus within computer science. Many companies have tried to create products that use various techniques to recognize objects and people, but none of them has managed to do this flawlessly. Lifesum wants a solution for its calorie-counting application: it should offer the user the opportunity to take a picture of a dish and then retrieve which dish the image illustrates. Histogram comparison is one solution to this problem, though not the most optimal image-comparison algorithm. Using an algorithm based on keypoint detection is the better solution, if training the algorithm is an option.
One idea for improving precision is to let the user choose among the five best dishes that the algorithm recommends; this increases the probability that the desired dish is one of the recommended ones. Future work on this topic could investigate how training the HOG (Histogram of Oriented Gradients) algorithm would work, to get a better result that could let the FLANN (Fast Approximate Nearest Neighbor Search Library) algorithm work faster.
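The histogram-comparison baseline and the "top five suggestions" idea above can be sketched as follows: rank reference dishes by histogram intersection against the query photo and return the best matches for the user to choose from. The dish names and histograms are made up for illustration, and real systems typically compute the histograms with a library such as OpenCV rather than from random vectors.

```python
# Sketch: rank dishes by histogram similarity, return top-k suggestions.
import numpy as np

def histogram_similarity(h1, h2):
    """Histogram intersection: 1.0 for identical normalized histograms."""
    return float(np.minimum(h1, h2).sum())

def top_matches(query, reference, k=5):
    scored = [(name, histogram_similarity(query, h))
              for name, h in reference.items()]
    return sorted(scored, key=lambda x: x[1], reverse=True)[:k]

rng = np.random.default_rng(1)
def norm(v): return v / v.sum()

reference = {name: norm(rng.random(8)) for name in
             ["pasta", "salad", "soup", "burger", "sushi", "curry"]}
query = reference["salad"]  # a photo of salad should match itself best
print(top_matches(query, reference, k=5))
```

Showing the user the full ranked list instead of only the single best match is what raises the chance that the correct dish appears among the suggestions.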
