I am working on an AI-powered fitness and food tracker that automatically logs your food from a photo. One of the difficulties I had when going to the gym was keeping up with logging and sharing my macros weekly with my personal trainer. Manually logging food is a hassle and a massive pain point - so my app, Eat n Snap, attempts to solve this problem. You can also set weight and BMI goals and see your progress on a weekly basis.
You probably know this already if you have researched alternatives, but there is a plethora of apps like this — Lifesum, Cal AI, MacroFactor, just to name a few.
Good question! I have added better UX features, like the ability to record a voice memo at the end of the day to automate logging. It would be helpful for someone like a busy professional who has a hard time taking time out of the day to log food at every meal... During his downtime, or before he goes to bed, he can quickly jot down what he ate, and the AI would summarise and log it for him.
I think this is hard with only a photo because you can’t always see what’s inside. But I’ve always dreamed of something like this paired with some kind of affordable hardware scanner that can get just enough data to fill in the blanks from the photos.
This is already a feature in an app called MacroFactor. But there is definitely room for improvement in the field.
One thing I miss in MacroFactor is memory of my previous choices.
Example:
If I take a picture of a glass of milk, it always assumes it is whole milk (3.5% fat). I then change it to low-fat milk (0.5% fat). But no matter how many times I do that, it keeps assuming the milk in the photo is whole milk.
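A fix for this could be as simple as keying the user's manual corrections by the label the recognizer produced, and using that as the default next time. Here is a minimal sketch of the idea (all class and method names here are hypothetical, not MacroFactor's actual API):

```python
# Hypothetical sketch: remember a user's manual corrections so that the
# next time the photo recognizer returns the same label, the app defaults
# to the user's previous choice instead of the recognizer's guess.

class FoodLog:
    def __init__(self):
        # maps the recognizer's label -> the user's last correction
        self.corrections = {}

    def resolve(self, recognized_label):
        """Return the user's preferred item for this label, if any;
        otherwise fall back to the recognizer's label."""
        return self.corrections.get(recognized_label, recognized_label)

    def correct_entry(self, recognized_label, user_choice):
        """Record that the user changed this label to something else."""
        self.corrections[recognized_label] = user_choice


log = FoodLog()
# The recognizer says "whole milk (3.5% fat)"; the user corrects it once:
log.correct_entry("whole milk (3.5% fat)", "low-fat milk (0.5% fat)")
# The next photo of milk now defaults to the corrected choice:
print(log.resolve("whole milk (3.5% fat)"))  # low-fat milk (0.5% fat)
```

In a real app the mapping would be persisted per user, and a smarter version could condition on context (time of day, meal type) rather than a flat label lookup, but even this flat version would stop the whole-milk loop described above.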
Sign up for my waitlist (or DM me if you want to know more) here: https://www.getsnapneat.com