My final topic is female bulimia, which is also my research topic in my social innovation class. I found that women are more likely than men to suffer from bulimia, mainly for physiological and psychological reasons. Physiologically, women have a greater tendency to overeat than men, and at the same time society's expectations for women's bodies are far stricter than those for men.
I began trying to express that the overeating of bulimia patients is often a way to vent their inner hurt. Looking back on the models shared in Gene's class, I thought about how machine learning could help me develop my theme. The visual effects produced by the MiDaS model in Runway are consistent with the theme I want to express.
So my first attempt was to download photos of high-calorie food and then run them through MiDaS repeatedly. Below is the video effect I got:
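MiDaS outputs a relative (unitless) depth map rather than a finished image, so each prediction has to be rescaled before it can be viewed or written out as a video frame. Here is a minimal sketch of that normalization step; the random array is only a stand-in for a real MiDaS prediction, and the function name is my own:

```python
import numpy as np

def depth_to_grayscale(depth: np.ndarray) -> np.ndarray:
    """Normalize a relative depth map to an 8-bit grayscale frame.

    MiDaS predicts relative depth, so values must be rescaled to
    0-255 before they can be displayed or exported as video.
    """
    d_min, d_max = depth.min(), depth.max()
    if d_max - d_min < 1e-8:          # flat map: avoid division by zero
        return np.zeros(depth.shape, dtype=np.uint8)
    norm = (depth - d_min) / (d_max - d_min)
    return (norm * 255).astype(np.uint8)

# A random array stands in for a real MiDaS prediction here.
fake_depth = np.random.rand(384, 384).astype(np.float32)
frame = depth_to_grayscale(fake_depth)
```

In practice the real depth map would come from running the MiDaS network on each food photo, and the resulting frames would be stitched into the video.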
But the way the pictures transition into one another is not the result I want, so I tried to use Runway to train my own model.
Because I used a wide variety of food pictures to train the model, the results were not very good. As shown in the video above, the outline of the food is hardly visible, so I later narrowed the food type down and trained only on pictures of burgers, which I think are high-calorie and can make people feel hungry. Here are the results I got:
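Narrowing the training set to one food type can be done before uploading anything to Runway. A hypothetical sketch of that curation step is below; the filename convention (food type embedded in the name, e.g. `burger_001.jpg`) and the function name are my assumptions for illustration:

```python
import shutil
from pathlib import Path

def collect_class(src_dir: str, dst_dir: str, keyword: str) -> int:
    """Copy only images whose filename contains `keyword` into dst_dir.

    Assumes files were saved with the food type in the name
    (e.g. burger_001.jpg) -- a naming convention chosen for this sketch.
    Returns the number of images copied.
    """
    src, dst = Path(src_dir), Path(dst_dir)
    dst.mkdir(parents=True, exist_ok=True)
    count = 0
    for img in src.glob("*.jpg"):
        if keyword in img.stem.lower():
            shutil.copy(img, dst / img.name)
            count += 1
    return count
```

The resulting single-class folder is what would then be uploaded to Runway for training.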
If this project is developed further, I hope to collect food photos from real bulimia patients and train on them.
If the photos contain multiple types of food, what would be a better generative model to use instead?
For the interactive part, how can I recognize only a person's chewing sound, or detect whether the person is eating at all?
After that, I tried importing the video into TouchDesigner and controlling its playback speed with the human voice: for example, chewing sounds speed up the video playback. The following is my demo.
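Inside TouchDesigner this kind of mapping is usually a small Python expression driven by an audio CHOP. A standalone sketch of the idea, where NumPy stands in for TouchDesigner's audio input and the threshold and speed range are my assumptions, not measured values:

```python
import numpy as np

def loudness_to_speed(samples: np.ndarray,
                      quiet_speed: float = 1.0,
                      loud_speed: float = 4.0,
                      threshold: float = 0.05) -> float:
    """Map the RMS loudness of an audio buffer to a playback-speed multiplier.

    Below `threshold` the video plays at normal speed; above it, the
    speed scales linearly up to `loud_speed`. In TouchDesigner the
    returned value would drive the movie player's speed parameter.
    """
    rms = float(np.sqrt(np.mean(samples ** 2)))
    if rms <= threshold:
        return quiet_speed
    # Clamp RMS to [threshold, 1.0] and interpolate the speed.
    t = min((rms - threshold) / (1.0 - threshold), 1.0)
    return quiet_speed + t * (loud_speed - quiet_speed)

silence = np.zeros(1024)          # no chewing: normal speed
chewing = np.full(1024, 0.5)      # loud, sustained sound: faster playback
```

Distinguishing chewing from speech or background noise would need more than loudness alone (for example, looking at the rhythm or frequency content of the sound), which is the open question above.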