I plan to combine the final of my machine learning for pcomp class with the final assignment of The Nature of Code.
The idea for the p5 part is to use Toxiclibs.js to build an elastic grid image. The variables that control wind, elasticity, and gravity are mapped to sliders, so that the experiencer can change their strength at any time and experience different visual effects.
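The slider mapping is just a linear remap from the slider's range to each physics parameter's range. A minimal sketch of what I mean (the ranges below are placeholder assumptions, not final values; in the actual sketch the sliders would come from p5's `createSlider()` and be read with `.value()`):

```javascript
// Linear mapping from a slider's range to a physics parameter's range,
// equivalent to p5's map(value, inMin, inMax, outMin, outMax).
function sliderToParam(value, inMin, inMax, outMin, outMax) {
  return outMin + ((value - inMin) / (inMax - inMin)) * (outMax - outMin);
}

// Hypothetical ranges: sliders run 0..100, physics values are small floats.
const windStrength = sliderToParam(50, 0, 100, 0, 0.5);    // mid slider -> 0.25
const gravity      = sliderToParam(100, 0, 100, 0, 1.0);   // max slider -> 1.0
const springK      = sliderToParam(25, 0, 100, 0.001, 0.01);

console.log(windStrength, gravity, springK);
```

Reading the sliders fresh every frame in `draw()` is what lets the experiencer change the forces while the net is moving.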
The idea for the ML part is to use the Handpose tracking and Handpose tracking + Neural Network examples from the machine learning class, so that opening and closing the hand controls the locking and unlocking of the corners of the elastic net.
Here is my inspiration:
I followed Shiffman's toxiclibs.js tutorial and used p5 to create a net affected by elasticity, gravity, and wind. When designing the wind, I added Perlin noise to the wind function to simulate a more natural effect. The mouse controls the position of the lower corner of the net: on mousePressed, the two points in the upper corners are unlocked; on mouseReleased, the two points are locked again.
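The lock/unlock behaviour can be sketched outside the library with a tiny Verlet particle. This is a simplified stand-in for how toxiclibs' `VerletParticle2D.lock()`/`unlock()` behaves (a locked particle ignores forces), not the library's actual code, and the sine-based wind is a stand-in for p5's Perlin `noise()`, which isn't available here:

```javascript
// Minimal Verlet particle with lock/unlock, mimicking the spirit of
// toxiclibs' VerletParticle2D (simplified stand-in, not the real API).
class Particle {
  constructor(x, y) {
    this.x = x; this.y = y;     // current position
    this.px = x; this.py = y;   // previous position (Verlet integration)
    this.locked = false;
  }
  lock()   { this.locked = true; }
  unlock() { this.locked = false; }
  update(fx, fy) {
    if (this.locked) return;    // locked particles ignore all forces
    const vx = this.x - this.px; // implicit velocity
    const vy = this.y - this.py;
    this.px = this.x; this.py = this.y;
    this.x += vx + fx;          // Verlet step: position + velocity + force
    this.y += vy + fy;
  }
}

// Stand-in wind: in the real sketch this would be strength * noise(t).
function wind(t, strength) {
  return strength * Math.sin(t * 0.01);
}

const corner = new Particle(0, 0);
corner.lock();
corner.update(wind(0, 0.5), 0.1); // no effect while locked
corner.unlock();
corner.update(0.2, 0.1);          // now wind and gravity apply
console.log(corner.x, corner.y);  // the corner moved after unlocking
```

In the real sketch, mousePressed would call `unlock()` on the two upper corner particles and mouseReleased would call `lock()` again.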
Then I tried the Handpose tracking + Neural Network model, training it to recognize lock (when the five fingers are closed) and unlock (when the five fingers are open).
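The neural network learns this distinction from the recorded examples, but the underlying signal can be sketched with a simple heuristic: open fingers put the fingertips far from the wrist, a closed fist keeps them close. The keypoint layout below follows Handpose's 21-point format (wrist at index 0, fingertips at indices 4, 8, 12, 16, 20); the pixel threshold is an arbitrary placeholder that the trained model effectively replaces:

```javascript
// Heuristic stand-in for the trained open/closed classifier.
// keypoints: array of 21 [x, y] pairs in Handpose order
// (index 0 = wrist, fingertips at 4, 8, 12, 16, 20).
function isHandOpen(keypoints, threshold = 100) {
  const [wx, wy] = keypoints[0];
  const tips = [4, 8, 12, 16, 20];
  const avgDist =
    tips.reduce((sum, i) => {
      const [x, y] = keypoints[i];
      return sum + Math.hypot(x - wx, y - wy);
    }, 0) / tips.length;
  return avgDist > threshold; // open hand -> unlock, closed fist -> lock
}

// Synthetic data: fingertips 150px from the wrist (open) vs. 30px (closed).
const openHand = Array.from({ length: 21 }, () => [150, 0]);
openHand[0] = [0, 0];
const closedFist = Array.from({ length: 21 }, () => [30, 0]);
closedFist[0] = [0, 0];

console.log(isHandOpen(openHand), isHandOpen(closedFist)); // true false
```

The advantage of training the neural network instead is that it tolerates different hand sizes and distances from the camera without hand-tuning a threshold like this.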
I used the palm position instead of (mouseX, mouseY) so that the hand can control the movement of the net.
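Swapping (mouseX, mouseY) for the palm position needs one detail handled: the webcam preview is usually mirrored, so the x coordinate has to be flipped, and the video resolution has to be scaled to the canvas. A sketch of that mapping (the 640x480 video and 800x600 canvas sizes are assumptions):

```javascript
// Map a Handpose keypoint from video coordinates to canvas coordinates,
// flipping x so the net follows the hand in a mirrored preview.
function handToCanvas([hx, hy], videoW, videoH, canvasW, canvasH) {
  const x = canvasW - (hx / videoW) * canvasW; // mirror horizontally
  const y = (hy / videoH) * canvasH;
  return [x, y];
}

// Hypothetical sizes: 640x480 webcam feed, 800x600 canvas.
const [nx, ny] = handToCanvas([160, 240], 640, 480, 800, 600);
console.log(nx, ny); // 600 300
```

Without the mirror flip, the net drifts left when the hand moves right, which feels wrong to the experiencer.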
Final version video
1. In the visual part, I only used toxi.physics2d; I am considering what the visual effect would be after switching to Vec3D.
2. Could other ML poses replace the slider controls, and how could the interaction speed be optimized?