I implemented the algorithm on a phone (code here). In the screenshot, markers 1 and 2 are landmarks, identified and outlined in green using OpenCV; the phone then uses their positions together with the accelerometer data to predict where the control markers 3 and 4 are on the screen, outlining them in red.
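The post doesn't include the prediction code, but here is a minimal sketch of one way the geometric step could work, under an assumption not stated in the original: if the landmarks and control markers lie in a common plane viewed roughly head-on, two landmark correspondences are enough to pin down a 2-D similarity transform (rotation, uniform scale, translation), which can then map the known layout positions of the control markers to predicted screen positions. All function names here are hypothetical.

```python
# Hypothetical sketch: predict control-marker screen positions from two
# detected landmarks, assuming a planar, roughly fronto-parallel scene.
# Points are represented as complex numbers z = x + iy, so a similarity
# transform is just z -> a*z + b for complex a (rotation+scale) and b (shift).

def similarity_from_two_points(src, dst):
    """Solve dst = a*src + b from two point correspondences.

    src, dst: lists of two (x, y) tuples (layout coords -> screen coords).
    Returns the complex pair (a, b)."""
    s0, s1 = complex(*src[0]), complex(*src[1])
    d0, d1 = complex(*dst[0]), complex(*dst[1])
    a = (d1 - d0) / (s1 - s0)   # rotation and scale
    b = d0 - a * s0             # translation
    return a, b

def predict_points(points, a, b):
    """Map layout-coordinate points to predicted screen coordinates."""
    out = []
    for p in points:
        z = a * complex(*p) + b
        out.append((z.real, z.imag))
    return out
```

For example, if landmarks at layout positions (0, 0) and (1, 0) are detected at screen positions (10, 10) and (14, 10), a control marker at layout position (0, 1) is predicted at screen position (10, 14). In the real implementation the accelerometer presumably helps constrain or disambiguate this fit when the phone is tilted, which a pure 2-D similarity cannot capture.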
For someone like me who does some philosophy of science, it was an interesting experience to actually do a real experiment and collect data from it.
I am planning at some point to try to implement the algorithm using infrared LEDs under a TV and the accelerometer and infrared camera inside a right Nintendo Switch Joy-Con. To that end, over the last couple of days I've reverse-engineered two of the Joy-Con infrared camera's blob-identification modes.