You Are What You Eat: So Measure What You Eat!


Parisa Pouladzadeh, Shervin Shirmohammadi, Abdulsalam Yassine
University of Ottawa, Canada

Measuring food calorie and nutrition intake on a daily basis is one of the main tools that allows dieticians, doctors, and their patients to control and treat obesity, overweight, or other food-related health problems. Yet doing this measurement correctly and on a daily basis is challenging, and it is one of the main reasons why diet programs fail. In this article, we look at calorie intake measurement techniques, covering both traditional and newer methods, with emphasis on the latter. Among the newly proposed methods, Vision Based Measurement (VBM) [33] has gained a lot of attention, because it makes it very easy for users to measure their food's calories and nutrition by simply taking a picture of the food with their smartphone. However, VBM still faces challenges, such as achieving higher measurement accuracy, recognizing complex food items such as mixed food, and the lack of sufficient processing power. When measuring food calories with VBM, recognizing the food is particularly difficult because food items vary widely in shape and appearance. Furthermore, the algorithms used for food recognition and classification are computationally intensive. Several solutions and architectures have been proposed to tackle these challenges, which we will cover in this article.

According to the World Health Organization (WHO), in 2014 more than 1.9 billion adults were overweight, including 600 million who were obese [1].
This is a serious health problem: WHO reports that obesity and overweight lead to at least 2.8 million deaths each year, about 35.8 million Disability-Adjusted Life Years, adverse metabolic effects on blood pressure, cholesterol, triglycerides, and insulin resistance, higher risks of coronary heart disease, ischemic stroke, and type 2 diabetes, and higher risk of cancer of the breast, colon, prostate, endometrium, kidney, and gall bladder [35]. The most important strategy to confront this epidemic is diet management. While people generally understand the links between diet and health, many factors prevent them from taking control of their eating habits. For example, many people find it difficult to examine all the information about nutrition and dietary choices, even though widespread nutritional information and guidelines are available to them. Furthermore, many people neglect measuring or controlling their daily calorie intake due to a lack of nutritional knowledge, irregular eating patterns, or lack of self-control. Empowering patients with an effective long-term solution requires

technological mechanisms that help them make permanent changes to their dietary quality, encourage them to eat healthily, and monitor their calorie intake in order to reduce their risk of developing diet-related illnesses. A critical part of any diet management plan is measuring the number of calories in the daily food intake. This is crucial not only for those who want to lose weight in a healthy way, but also for those who want to maintain a healthy weight. In fact, all existing obesity treatment techniques require the patient to record all food consumed per day, in order to compare food intake to consumed energy, both measured in calories. Unfortunately, in most cases patients face difficulties in measuring calorie intake, for reasons such as self-denial, lack of nutritional information, the tedious process of manually recording the information, and others. Fortunately, there are many approaches to mitigate these difficulties, which we discuss next.

Calorie Measurement Approaches

Existing calorie measurement approaches can be put into four main categories: Traditional Clinical Methods, Smart Cooking Systems, Dedicated Portable Systems, and Smartphone-Based Systems. Each of these has its own strengths and weaknesses, as presented next.

Traditional Clinical Methods

The main clinical method is the 24-hour dietary recall (24HR), in which the patient is asked to remember and report all food consumed in the previous 24 hours. The recall is normally conducted in person or by telephone interview. The interview usually needs specific probes to help the patient remember all foods consumed in the day. In this method, the interviewer reviews the daily reports to help the patient come up with a better program for the next days [2]. While helpful, this method has a major drawback related to underreporting. In [3], for example, it has been shown that underreporting is associated with factors such as obesity, gender, education, perceived health status, age, and ethnicity. In [4], we see that important information about food portions is underreported. Underreporting of food intake is discussed in other studies such as [5]. It has been observed that portion sizes have grown considerably in the past 20 to 30 years [4],[5], and this may be a contributor to underreporting. Obviously, there is a need for more accurate methods of measuring dietary information.

Smart Cooking Systems

To reduce or eliminate underreporting in 24HR, smart cooking systems, such as the smart kitchen in [6], were proposed. In this approach, a Calorie-Aware Kitchen is designed that includes cameras to increase the user's awareness of choosing healthy food and of the amount of calories in the prepared food. The kitchen includes an overhead camera to capture images of the food preparation process, while sensors connected to the counter and stove measure all the ingredients and cover most of the places inside the kitchen. This results in immediate feedback for the user, with a suggestion for a suitable calorie intake. The major downsides of this approach are its limited usage and the inability to "carry" the smart kitchen with you and use it outside the home. Other systems have been proposed instead. Nishimura and Kuroda have suggested a wearable sensor system using a microphone [7], while researchers in [8] have created a dietary-aware dining table for monitoring dietary intake. They used radio frequency identification (RFID) as a surface sensor to gauge the type of food eaten and integrated scales into a dining table to measure food weight. But there are many drawbacks to this technique, including the difficulty of using it in multiple locations and the complexity of attaching an RFID tag to each served food.

Dedicated Portable Systems

To resolve the immobility of smart kitchens, some portable systems have been proposed. In [28], we see a system which claims to automatically count calorie intake. The main idea is to use bioimpedance to measure glucose in the user's cells. But many scientists seriously doubt the validity of this approach, and the system has not been properly evaluated on a wide scale. Another clear disadvantage of this system is that it only measures the calories after the user has already eaten the food, which is too late for obesity patients who need to know the amount of calories before they eat. Near infrared (NIR) spectroscopy has also been recently proposed to determine food's composition, with [29] and [30] as commercial examples already available in the market. These systems can tell the user, for example, the amount of saturated fats per 100 grams in the food, which, when coupled with a good quality lookup database of food ingredient spectra, can give information in terms of calories and nutrition. However, these tools cannot measure the weight and amount of each food ingredient. As such, going from the "fats per 100 grams" value to actual calories in the food is not trivial for users. In addition, transparent liquids cannot be measured with NIR spectroscopy unless special containers are used.

Smartphone Based VBM Systems

Recently, VBM has made it possible for a person to take a picture of a plate of food using a smartphone and measure its number of calories. Calorie measurement using VBM is a difficult case of object recognition, mainly because food portions come in different sizes and shapes. Also, food portions can be single items, multiple items, or mixed food such as salad, soup, etc. Consequently, mixed food images are difficult to measure accurately with VBM. Another issue is processing time, since most recognition and segmentation algorithms are computationally intensive and require fast processors and plenty of memory. Like other VBM systems, a food calorie measurement system is divided into four main stages: Preprocessing, Image Analysis, Measurand Identification, and Measurement. These are shown in Figure 1, which follows the same VBM architecture as in [33]. Let us now examine each stage.
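Read as a structural sketch, the four stages form a simple pipeline. The Python below is illustrative only: the four stage functions are supplied by the caller as placeholders, and do not implement any specific method from the cited works.

```python
def measure_calories(raw_image, pipeline):
    """Run the four VBM stages in order on one food image.

    `pipeline` is a tuple of four caller-supplied stage functions, so
    this sketch stays independent of any particular technique.
    """
    preprocess, segment, recognize, calories = pipeline
    clean = preprocess(raw_image)              # 1) pre-processing
    portions = segment(clean)                  # 2) portion segmentation
    total = 0.0
    for portion in portions:
        food = recognize(portion)              # 3) food recognition
        total += calories(food, portion)       # 4) calorie measurement
    return total
```

Any concrete system then amounts to a choice of the four stage functions, which is how the methods surveyed below differ from one another.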

Figure 1. Various stages of VBM calorie measurement. Left to right: the food image is acquired by a smartphone, and is fed to image processing (green), computational intelligence (violet), and measurement (yellow) operations. The figure intentionally follows the same architecture as shown in [33], to show the mapping of food calorie measurement to a generic VBM system.

1) Pre-processing

In this stage, the raw food image is prepared for the next stages. Any glare, noise, blur, etc. can be removed here. In addition, operations such as normalization, thresholding, and denoising, and picture manipulations such as resizing and cropping, are performed at this stage if needed [33].
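A minimal, dependency-light sketch of this stage follows (NumPy only). The output size, the box-blur kernel, and the normalization range are illustrative choices, not parameters taken from any cited system.

```python
import numpy as np

def preprocess(image, out_size=(64, 64), blur_kernel=3):
    """Prepare a raw RGB food image for segmentation and recognition:
    resize with nearest-neighbour sampling, normalise intensities to
    [0, 1], and apply a simple box blur to suppress sensor noise."""
    h, w = image.shape[:2]
    rows = np.arange(out_size[0]) * h // out_size[0]
    cols = np.arange(out_size[1]) * w // out_size[1]
    resized = image[rows][:, cols].astype(np.float64)

    # Normalise to [0, 1] so later stages are less exposure-sensitive.
    resized -= resized.min()
    if resized.max() > 0:
        resized /= resized.max()

    # Box blur: average each pixel with its neighbourhood to denoise.
    k = blur_kernel // 2
    padded = np.pad(resized, ((k, k), (k, k), (0, 0)), mode="edge")
    blurred = np.zeros_like(resized)
    for dy in range(blur_kernel):
        for dx in range(blur_kernel):
            blurred += padded[dy:dy + out_size[0], dx:dx + out_size[1]]
    return blurred / (blur_kernel * blur_kernel)
```

A production system would use a library such as OpenCV for these operations; the point here is only the order and intent of the steps.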

2) Food Portion Segmentation

Segmentation determines the boundaries of the food portions inside the meal. The ideal output of image segmentation is a grouping of those pixels in the image that share certain visual characteristics which are perceptually meaningful to human observers. This is a difficult problem, as humans use a complicated yet subconscious process to perform this task. Various methods of segmentation are used in different food applications, such as color and texture segmentation, k-means clustering, and graph cut based segmentation.
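Of these, k-means clustering is the simplest to sketch: pixels are grouped by colour similarity, so that each cluster roughly corresponds to one food portion or to the plate/background. The implementation below is a bare-bones version for illustration (fixed iteration count, colour features only); real systems typically add texture features and spatial smoothing.

```python
import numpy as np

def kmeans_segment(image, k=3, iters=10, seed=0):
    """Group pixels into k colour clusters -- a minimal version of the
    k-means segmentation used to isolate food portions."""
    pixels = image.reshape(-1, image.shape[-1]).astype(np.float64)
    rng = np.random.default_rng(seed)
    centers = pixels[rng.choice(len(pixels), size=k, replace=False)]
    for _ in range(iters):
        # Assign each pixel to its nearest cluster centre.
        dists = np.linalg.norm(pixels[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Move each centre to the mean of the pixels assigned to it.
        for c in range(k):
            if np.any(labels == c):
                centers[c] = pixels[labels == c].mean(axis=0)
    return labels.reshape(image.shape[:-1]), centers
```

Each connected region of equal label is then treated as a candidate food portion for the recognition stage.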

3) Food Recognition

In this step, the extracted features of each food portion are classified in order to recognize the portion, by applying classification methods such as SVM, neural networks, and deep learning. Furthermore, offloading these methods to the cloud can substantially improve both accuracy and processing time.
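To show the shape of this step without pulling in an ML library, here is a minimal linear classifier (a perceptron) over extracted feature vectors. It is a stand-in for the SVM stage, not an SVM: a real system would use a max-margin solver, but the resulting decision rule, sign(w·x + b) over the feature vector, has the same form. The feature vectors and labels are hypothetical.

```python
import numpy as np

def train_linear_classifier(features, labels, epochs=20, lr=0.1):
    """Perceptron training over food feature vectors (labels in {-1, +1}).
    Illustrates the linear decision rule shared with linear SVMs."""
    w = np.zeros(features.shape[1])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(features, labels):
            if y * (x @ w + b) <= 0:   # misclassified -> update weights
                w += lr * y * x
                b += lr * y
    return w, b

def predict(w, b, x):
    """Classify one feature vector with the learned linear rule."""
    return 1 if x @ w + b > 0 else -1
```

In a multi-class food recognizer, one such binary decision is trained per food category (one-vs-rest), or a multi-class SVM is used directly.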

4) Calorie Measurement

Once the food is recognized (chicken, beef, pasta, etc.), existing nutritional tables are used to calculate its calories. These tables require the amount of food, in grams, to give a final answer. Therefore, we must not only recognize the food, but also measure its mass. To do the latter, we can use approaches such as the one suggested in [13], which uses the user's thumb, shown in Figure 1, as a calibration reference and calculates the area, then volume, then mass of the food portion using existing food density tables. Another approach measures the distance between the smartphone and the food portion, and uses that distance to calculate the food portion's area and consequently its mass [34]. Sources of uncertainty must also be taken into account in this stage, to give a meaningful and accurate measurement result.
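The area-to-volume-to-mass-to-calories chain can be made concrete with a short worked example. The density and calorie values below are illustrative placeholders, not taken from [13] or any published table, and the fixed portion depth is an assumption; real systems look these up in published food density and nutrition databases and estimate depth from the image.

```python
# Hypothetical lookup tables (illustrative values only).
DENSITY = {"rice": 0.90, "chicken": 1.04}        # g per cm^3
KCAL_PER_GRAM = {"rice": 1.30, "chicken": 2.39}  # kcal per g

def calories_from_image(food, pixel_area, pixels_per_cm, depth_cm):
    """Follow the chain described in the text:
    calibrated area -> volume -> mass (density) -> calories (nutrition).

    pixels_per_cm would come from a calibration reference such as the
    user's thumb [13]; depth_cm is an assumed portion depth.
    """
    area_cm2 = pixel_area / (pixels_per_cm ** 2)
    volume_cm3 = area_cm2 * depth_cm
    mass_g = volume_cm3 * DENSITY[food]
    return mass_g * KCAL_PER_GRAM[food]
```

For example, a rice portion covering 40,000 pixels at 20 pixels/cm with an assumed 2 cm depth gives 100 cm² of area, 200 cm³ of volume, 180 g of mass, and hence 234 kcal under these placeholder tables.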

5) Current VBM Implementations

Let us now take a look at some existing implementations. In [9], Jie Yang et al. created a method to identify fast-food intake from videos of eating, using a wearable camera. In this method, a number of images captured in a fast-food restaurant are compared to images stored in a computer database. The authors placed the camera in three different locations and trained their system with 101 food types. Related to this model, Jieun Kim in [10] proposed a method for automatically estimating the amount of a given nutrient or calorie contained in commercial food. The method applies when no part of any ingredient is removed in the preparation process. First, the system automatically finds the amount of each ingredient used to prepare the food, using the information provided on its label along with the nutrition information for at least some of the ingredients. Then, it utilizes the Simplex algorithm to refine these amounts based on the nutrient content. By applying different image processing and segmentation algorithms, calorie measurement systems have been able to improve their accuracy and processing time. In [11], the authors used image processing techniques to measure and analyze large food portions. A similar approach is reported in [12], where the idea is to take pictures of the food and, based on a calibration card located inside the picture as a measurement pattern, calculate the size of the food portions. Here, the food is manually identified with the help of nutritional information retrieved from a database. Then, the calories are calculated for each picture, and finally the complete set of information is stored in databases at the research facility. In this case, based on the known size of the calibration card, the portions' size and consequently calories can be measured. In [13] and [14], food images taken by the user's smartphone are first sent to a pre-processing step.
Then, at the segmentation step, color and texture segmentation is used to extract the food portions. For each detected food portion, a feature extraction process is performed, in which various food features including size, shape, color, and texture are extracted. The extracted features are then sent to the classification step where, using a Support Vector Machine (SVM), the food portion is identified. Finally, by estimating the area of the food portion and using nutritional tables, the calorie value of the food is measured. In addition to the above, researchers in [15] use Neural Networks (NN) to measure calories from food images. In this approach, they capture a photo of several dishes in a tray before and after eating. Specifically, an image of the whole tray is captured first. This image is then converted to a binary image using threshold values, and a small image of the food is extracted from the tray image. Through these procedures, the system identifies information about the image such as

length, width, and shape. This information is transferred to the NN, whose results are then sent to a simulation program to compare the information and analyze the results. But this method is difficult for the user to follow: the user must capture several photos, and the food image needs to be analyzed by a computer, which is impractical for everyday usage. In [16] we see a method that automatically locates and identifies food in a variety of images. Two concepts are combined in this method: first, a set of segmented objects is partitioned into similar object classes based on their features, such as tablecloths and background; second, automatically segmented regions are classified using a multichannel feature classification system such as normalized graph cut. This method also uses SVM as its classifier. The final decision is obtained by combining class decisions from individual features. In [17], the authors propose an approach that analyzes a food item at the pixel level by classifying each pixel as a certain ingredient, and then uses statistics and spatial relationships between those pixel ingredient labels as features in an SVM classifier. Results show that using pixel ingredient labels to identify food greatly increases classification and measurement accuracy, but at the expense of higher computational cost. The work in [18] introduces a mobile food recognition system where the user first draws bounding boxes by interacting with the screen, and then the system starts food item recognition within the indicated bounding boxes. To recognize the items more accurately, each food item is segmented by Graph Cut, a color histogram and SURF-based bag-of-features are extracted, and finally a linear SVM with a fast kernel is used to classify it into one of fifty food categories.
In addition, the system estimates the direction of food regions where a higher SVM output score is expected, and shows it as an arrow on the screen to ask the user to move the smartphone's camera; this recognition process is repeated about once a second. Experiments show an 81.55% measurement accuracy for the top 5 category candidates when the ground-truth bounding boxes are given. Yu Wang in [20] has also developed a dietary assessment system that uses food images captured by a smartphone and employs recursive Bayesian estimation to incrementally learn from a person's eating history. Results show that an improvement of 11% in food classification accuracy can be achieved. The above-mentioned methods are computationally demanding and require processing resources beyond what a typical smartphone can handle. For this reason, several systems use cloud computing to offload image segmentation and classification. The cloud not only allows for higher accuracy, but can also lower the processing time. An example is the work by Low et al. [21],

who extended the GraphLab framework to support dynamic and parallel computation of graphs in the cloud. The system implements extensions for pipelined locking and data versioning to avoid network congestion and latency, and has been successfully deployed on a large Amazon EC2 cluster. In [22], a large-scale graph processing system called Pregel is presented; results show that Distributed GraphLab performs 20 to 60 times better than Hadoop. Kraska et al. [23] introduce a distributed machine learning system called MLbase, which allows machine learning problems to be declared in a very simple way and implemented in a distributed and highly scalable manner without extensive systems knowledge. The optimizer in the framework converts ML jobs into a learning plan and returns the best answer to the user, improving the results iteratively in the background. Another system [24] also uses cloud computing for automatic diet management and recipe analysis. Convolutional Neural Networks (CNN), a form of deep learning, are also used to identify food portions more accurately and more easily. Because of the wide diversity of food types, image recognition of foods is generally very difficult. However, deep learning has recently been shown to be a very powerful image recognition technique, and more recent food calorie measurement systems have started using CNNs. For example, the system in [25] applies a CNN to the tasks of detecting and recognizing food images through parameter optimization. It constructs a dataset of the most frequent food items in a publicly available food-logging system, and uses it with a CNN to recognize incoming food objects and hence measure their calories. It shows that the CNN performs significantly better than traditional SVM-based methods. Another system [27] uses a 6-layer deep convolutional neural network to classify food image patches.
For each food item, overlapping patches are extracted and classified, and the class with the majority of votes is assigned to it. Experiments on a manually annotated dataset with 573 food items justified the choice of the involved components and proved the effectiveness of the proposed system, yielding an overall accuracy of 84.9%. Kagaya and Aizawa in [26] use three different image datasets: one collected from Instagram, Food-101, and Caltech-256. They investigate combinations of training and testing across all three datasets, and achieve a high accuracy of around 95%.
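The voting step of the patch-based approach in [27] is simple to sketch. Here the per-patch CNN is abstracted away: we assume each overlapping patch has already been classified, and only show how the final class is decided by majority vote.

```python
from collections import Counter

def classify_by_patch_vote(patch_predictions):
    """Majority vote over per-patch class predictions: each overlapping
    patch is classified independently (by a CNN in [27]), and the food
    item receives the most frequent class among its patches."""
    votes = Counter(patch_predictions)
    return votes.most_common(1)[0][0]
```

Voting over many patches makes the final decision robust to a few misclassified patches, which is one reason patch-based systems tolerate occlusion and clutter better than whole-image classifiers.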

Table I. Comparison of Different Smartphone-Based VBM Food Recognition Methods

| Paper | Architecture | Acquisition | Segmentation Techniques | Classification Techniques | Accuracy | Analysis Summary |
|---|---|---|---|---|---|---|
| [12] | Mobile | 1 picture, using a calibration card located inside the picture | - | - | - | The card is needed everywhere, and the picture is sent to a research center to be analyzed, which makes it non-real-time. |
| [18] | Mobile | 1 picture | Bounding box | Linear SVM | 81.55% | Uses only linear SVM. |
| [9] | Mobile | Video | SIFT package | - | 73% | Recognizes whole food, not food portions. Limited to a database of 101 foods from 9 fast-food restaurants in the USA. |
| [19] | Mobile | 1 picture | Bounding box | Linear SVM + Fisher Vector | 79.2% | Uses linear SVM and Fisher Vector together. |
| [15] | Mobile | 1 picture, dish image in the tray | Information such as width, length, diameter, and shape | NN | 70% | Uses a food rejection algorithm; sometimes mis-recognizes the correct position of the dish image. |
| [16] | Mobile | Picture | Normalized cut, salient region detection, fast rejection | KNN and SVM | Not compared | Good segmentation and classification method, but long processing time due to not using servers. |
| [20] | Mobile | Picture | Graph-based segmentation | Improved KNN and SVM | 11% improvement on average | Uses recursive Bayesian estimation to incrementally learn from a person's eating history. |
| [31] | Mobile-Server | 1 picture | Color, texture, and SIFT | SVM | 61.34% | 50 kinds of foods and a low number of pictures. |
| [32] | Mobile-Server | 1 picture | Bag-of-features (BoF), color histogram, Gabor features, and gradient histogram | MKL | 62.5% | Complete and accurate segmentation, though the final measurement is not accurate. |
| [13][14] | Mobile-Server | 2 pictures, user thumb | K-means clustering, color-texture segmentation | SVM + RBF kernel | 92.21% | Complete and accurate segmentation; reasonable accuracy for single food items, but low accuracy for mixed food. |

Future Challenges

Different mobile phone applications for calorie measurement were described in this paper. Through this review, we looked at different approaches and solutions dealing with different system architectures. However, several challenges remain, related to user acceptance studies, mixed food detection in complex meals, and computing resources and big data analysis for food image recognition and detection.

User acceptance studies: Calorie measurement apps depend heavily on how people perceive them. For users to accept these apps, the apps must be appealing, effective, convenient, and fast. People with a wide variety of styles and behaviors look for diet and food calorie measurement applications and are eager to use them. Furthermore, obesity is a problem of all ages, so an app must be user friendly and appealing to all ages and backgrounds. In this respect, extensive studies are needed to address user acceptance, the usability of the apps, and their effectiveness in truly helping users succeed in their goals.

Food recognition in complex meals: Food detection suffers from occlusion, especially for food items in complex meals. For this reason, using deep learning algorithms is advantageous because of their ability to learn efficient high-level features from data. However, deep learning becomes computationally expensive when applied to high-dimensional data such as food images, largely due to the learning process associated with a deep layered hierarchy. Furthermore, food images produce large amounts of raw data, which further complicates the learning process. The application of such learning algorithms to food recognition in complex meals remains largely unexplored, and deserves novel solutions that address the high dimensionality of food images.
Computing resources and big data analysis: Working on virtualization of cloud resources and big data analysis may help researchers achieve better accuracy and processing time in food recognition applications. Mobile devices in particular, with their limited storage and computing capabilities, are drivers for services provided by cloud computing instead of software running on individual devices. But some challenges remain in implementing such a system in the cloud. Service quality, such as the availability and performance of the cloud service, is very important in real-time applications. Sometimes the provisioned virtual resources are not sufficient to run the application, so choosing virtual resources that match the application's requirements is very important. Also, as discussed previously, processing time is an important parameter in choosing and using cloud services, so selecting cloud instances with enough compute units (ECUs) at a reasonable price is a major decision for app providers.
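The local-versus-cloud trade-off raised above can be captured in a back-of-the-envelope rule: offload only when upload time plus cloud compute beats on-device compute. The function and all its parameters are hypothetical illustrations, not a published offloading policy.

```python
def choose_execution_site(image_mb, local_secs_per_mb, cloud_secs_per_mb,
                          uplink_mbps):
    """Hypothetical offloading rule for cloud-backed VBM: estimate
    end-to-end time for both sites and pick the faster one.
    """
    local_time = image_mb * local_secs_per_mb
    # Cloud path pays upload latency (MB -> megabits) plus server compute.
    cloud_time = (image_mb * 8) / uplink_mbps + image_mb * cloud_secs_per_mb
    return "cloud" if cloud_time < local_time else "local"
```

For a 2 MB image, a fast uplink makes the cloud win despite the transfer cost, while a slow uplink keeps classification on the phone, which is exactly the availability/performance sensitivity discussed above.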

References

[1] World Health Organization, "Obesity and overweight," Fact Sheet No. 311, January 2015, http://www.who.int/mediacentre/factsheets/fs311/en/
[2] Y. C. Probst and L. C. Tapsell, "Overview of computerized dietary assessment programs for research and practice in nutrition education," J. Nutr. Educ. Behav., pp. 20–26, 2005.
[3] D. H. Wang, D. H. Kogashiwa, and S. Kira, "Development of a new instrument for evaluating individuals' dietary intakes," J. Am. Diet. Assoc., vol. 106, pp. 1588–1593, 2006.
[4] L. Harnack, L. Steffen, D. Arnett, S. Gao, and R. Luepker, "Accuracy of estimation of large food portions," J. Am. Diet. Assoc., vol. 104, pp. 804–806, 2004.
[5] R. Johnson, R. Soultanakis, and D. Matthews, "Literacy and body fatness are associated with underreporting of energy intake in US low-income women using the multiple-pass 24-hour recall: a doubly labeled water study," J. Am. Diet. Assoc., vol. 98, pp. 1136–1140, 1998.
[6] Pei-Yu Chi, Jen-Hao Chen, Hao-Hua Chu, and Jin-Ling Lo, "Enabling Calorie-Aware Cooking in a Smart Kitchen," 3rd International Conference on Persuasive Technology, 2008.
[7] J. Nishimura and T. Kuroda, "Human Action Recognition Using Wireless Wearable In-Ear Microphone," IEEJ, vol. 131, no. 9, sec. C, pp. 1570–1576, 2011.
[8] K. Chang et al., "The diet-aware dining table: Observing dietary behaviors over a tabletop surface," Proceedings of Pervasive Computing, 4th International Conference, pp. 366–382, May 2006.
[9] Jie Yang and Wen Wu, "Fast Food Recognition From Videos of Eating for Calorie Estimation," Intl. Conf. on Multimedia and Expo, IEEE, pp. 1210–1213, 2009.
[10] Jieun Kim and Mireille Boutin, "Estimating the Nutrient Content of Commercial Foods from their Label Using Numerical Optimization," in New Trends in Image Analysis and Processing – ICIAP 2015 Workshops, Lecture Notes in Computer Science, vol. 9281, Springer, pp. 309–316, 2015.
[11] L. Young and M. Nestle, "The contribution of expanding portion sizes to the US obesity epidemic," American Journal of Public Health, vol. 92, pp. 246–249, 2002.
[12] S. Rebro, R. Patterson, A. Kristal, and C. Cheney, "The effect of keeping food records on eating patterns," Journal of the American Dietetic Association, vol. 98, pp. 1163–1165, 1998.
[13] P. Pouladzadeh, S. Shirmohammadi, and R. Almaghrabi, "Measuring Calorie and Nutrition from Food Image," IEEE Transactions on Instrumentation & Measurement, vol. 63, no. 8, pp. 1947–1956, August 2014.
[14] P. Pouladzadeh, S. Shirmohammadi, and A. Yassine, "Using Graph Cut Segmentation for Food Calorie Measurement," IEEE International Symposium on Medical Measurements and Applications, pp. 1–6, Lisbon, June 2014.
[15] F. Takeda et al., "Dish extraction method with neural network for food intake measuring system on medical use," Computational Intelligence for Measurement Systems and Applications, pp. 56–59, July 2003.
[16] F. Zhu, M. Bosch, N. Khanna, and C. J. Boushey, "Multiple Hypotheses Image Segmentation and Classification With Application to Dietary Assessment," IEEE Journal of Biomedical and Health Informatics, vol. 19, no. 1, pp. 377–389, January 2015.
[17] http://jaybaxter.net/6869_food_project.pdf
[18] Y. Kawano and K. Yanai, "FoodCam: A Real-Time Mobile Food Recognition System Employing Fisher Vector," Springer International Publishing Switzerland, pp. 369–373, 2014.
[19] Y. Kawano and K. Yanai, "Rapid mobile object recognition using Fisher vector," 2nd IAPR Asian Conference on Pattern Recognition (ACPR), pp. 476–480, 2013.
[20] Y. Wang, Y. He, F. Zhu, C. Boushey, and E. Delp, "The Use of Temporal Information in Food Image Analysis," in New Trends in Image Analysis and Processing – ICIAP 2015 Workshops, Lecture Notes in Computer Science, vol. 9281, Springer, pp. 317–325, 2015.
[21] Y. Low et al., "Distributed GraphLab: A framework for machine learning and data mining in the cloud," Proceedings of the VLDB Endowment, vol. 5, no. 8, pp. 716–727, 2012.
[22] G. Malewicz et al., "Pregel: a system for large-scale graph processing," Proceedings of the 2010 ACM SIGMOD International Conference on Management of Data, ACM, 2010.
[23] T. Kraska, A. Talwalkar, J. Duchi, R. Griffith, M. Franklin, and M. I. Jordan, "MLbase: A Distributed Machine Learning System," Conference on Innovative Data Systems Research, 2013.
[24] A. Mazzei, L. Anselma, F. De Michieli, A. Bolioli, M. Casu, J. Gerbrandy, and I. Lunardi, "Mobile Computing and Artificial Intelligence for Diet Management," in New Trends in Image Analysis and Processing – ICIAP 2015 Workshops, Lecture Notes in Computer Science, vol. 9281, Springer, pp. 342–349, 2015.
[25] H. Kagaya, K. Aizawa, and M. Ogawa, "Food Detection and Recognition Using Convolutional Neural Network," ACM Multimedia Conference, Orlando, Florida, 4 pages, 2014.
[26] H. Kagaya and K. Aizawa, "Highly Accurate Food/Non-Food Image Classification Based on a Deep Convolutional Neural Network," in New Trends in Image Analysis and Processing – ICIAP 2015 Workshops, Lecture Notes in Computer Science, vol. 9281, Springer, pp. 350–357, 2015.
[27] S. Christodoulidis, M. Anthimopoulos, and S. Mougiakakou, "Food Recognition for Dietary Assessment Using Deep Convolutional Neural Networks," in New Trends in Image Analysis and Processing – ICIAP 2015 Workshops, Lecture Notes in Computer Science, vol. 9281, Springer, pp. 458–465, 2015.
[28] http://healbe.com/
[29] http://www.tellspecopedia.com
[30] http://www.consumerphysics.com/myscio
[31] T. Joutou and K. Yanai, "A food image recognition system with multiple kernel learning," 16th IEEE International Conference on Image Processing (ICIP), pp. 285–288, 2009.
[32] H. Hoashi, T. Joutou, and K. Yanai, "Image recognition of 85 food categories by feature fusion," IEEE International Symposium on Multimedia (ISM), pp. 296–301, 2010.
[33] S. Shirmohammadi and A. Ferrero, "Camera as the Instrument: The Rising Trend of Vision Based Measurement," IEEE Instrumentation and Measurement Magazine, vol. 17, no. 3, pp. 41–47, June 2014.
[34] P. Kuhad, A. Yassine, and S. Shirmohammadi, "Using Distance Estimation and Deep Learning to Simplify Calibration in Food Calorie Measurement," Proc. IEEE International Conference on Computational Intelligence and Virtual Environments for Measurement Systems and Applications, Shenzhen, China, June 12–14, 2015, 6 pages.
[35] World Health Organization, "Obesity situation and trends," http://www.who.int/gho/ncd/risk_factors/obesity_text/en/