evilC, on 12 April 2016 - 10:49 AM, said:
"I abandoned trying to emulate a mouse though due to the enormous pain of guestimating maximum twist rates for each mech"
I basically have this one licked. To detect the max twist rate, I send one big move (e.g. +50), then the same amount in the opposite direction as a series of small moves (i.e. 50 × -1). I then take a screenshot, hit center torso, then take another screenshot. If the two screenshots match, then 50 is at or below the max twist rate, so the code increases the rate and repeats.
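Sketched in Python rather than AHK for brevity, here's the shape of that search against a toy mech model. The real thing compares screenshots instead of simulated positions, and re-centers the torso between attempts; the class and function names here are made up for illustration:

```python
# Toy stand-in for the game: the torso moves at most max_rate units per tick.
class RateLimitedMech:
    def __init__(self, max_rate):
        self.max_rate = max_rate
        self.pos = 0

    def move(self, delta):
        # Clamp the per-tick movement, like the game clamps twist speed
        self.pos += max(-self.max_rate, min(self.max_rate, delta))

def detect_max_twist_rate(mech, limit=100):
    """Find the largest single-tick move the mech honours in full."""
    for rate in range(1, limit + 1):
        start = mech.pos              # "screenshot" before
        mech.move(rate)               # one big move
        for _ in range(rate):         # same distance back, one unit at a time
            mech.move(-1)
        if mech.pos != start:         # "screenshots" differ: rate was too big
            return rate - 1           # last rate that round-tripped cleanly
    return limit

print(detect_max_twist_rate(RateLimitedMech(max_rate=20)))  # 20
```

A binary search over the rate would get there in fewer probes, which matters when every probe costs two screenshots.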
Detecting twist range is also done with pixel detection. I center the torso, then keep twisting until the screen stops changing after I issue a mouse movement, which gives me the twist range in mouse units.
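The range probe works the same way; here's a toy Python version, again with simulated positions standing in for the pixel checks (names are invented for the sketch):

```python
# Toy stand-in: the torso stops dead at +/- twist_range mouse units.
class RangeLimitedMech:
    def __init__(self, twist_range):
        self.twist_range = twist_range
        self.pos = 0

    def move(self, delta):
        self.pos = max(-self.twist_range, min(self.twist_range, self.pos + delta))

def detect_twist_range(mech, limit=10000):
    """Twist one unit at a time until the 'screen' stops changing."""
    moved = 0
    for _ in range(limit):
        before = mech.pos          # screenshot before the move
        mech.move(1)
        if mech.pos == before:     # screen unchanged: we hit the stop
            return moved
        moved += 1
    return moved

print(detect_twist_range(RangeLimitedMech(twist_range=100)))  # 100
```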
This code is not currently in the zip in my thread; I have some more work to do on it before I release it, hopefully tonight.
You are correct about the bit of code that controls the "buffer". To avoid any confusion, here is the logic that it follows:
I will use fake values to make the math simpler:
TWIST_RATE: The number of mouse units that a mech can twist in one tick (10 ms for me). Let's use a value of 20 for this.
TWIST_RANGE: The number of mouse units that a mech can twist. Let's assume that this is 100 units from center to fully twisted right.
Let's also assume that a joystick reports -100 to +100 (so center to full right twist is also 100 units).
Current mouse "position" is 0
Joystick is 50% right (+50)
Corresponding mouse move to achieve that view would be +50
TWIST_RATE is 20, so we instead only move +20
End of loop
Next "tick".
Current mouse "position" is +20
Joystick has moved to 30% right (+30)
Corresponding mouse move to achieve that view would be +10
+10 is below 20, so we allow the full amount that it "wants" to move.
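The whole walkthrough boils down to one clamp per tick. A minimal Python sketch of that clamp, using the fake values from above (tick() is a made-up name for illustration):

```python
TWIST_RATE = 20  # fake value from above: max mouse units per tick

def tick(current, target):
    """One tick: how far to move the mouse toward target, capped at TWIST_RATE."""
    delta = target - current
    return max(-TWIST_RATE, min(TWIST_RATE, delta))

pos = 0
step = tick(pos, 50)        # joystick at +50: wants +50, capped to +20
pos += step                 # mouse "position" is now +20
print(step, tick(pos, 30))  # 20 10  (second tick wants +10, under the cap)
```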
This code is simple in AHK because I can only poll the joystick manually anyway (joystick axis input in AHK is not event-based), so even if the stick does not change, I get a current value each "tick". If you get joystick position in an event-driven manner, implementing this may be a bit trickier.
Looking at that code you posted, there seems to be rather a lot of logic going on for POV hats. This is an issue that I have come up against in AHK, and I found that generally the best solution was lookup tables (i.e. prebuilt static arrays). Conversions such as degrees to cardinal directions (at least with a normal POV, where the angles always come in 45° increments) are actually quite quick and simple with arrays. Lemme know if you need some pointers.
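For what it's worth, here's the array approach sketched in Python (function name is made up); if your API reports the POV in hundredths of a degree, as AHK's JoyPOV does, divide by 100 first:

```python
# Degrees -> cardinal direction via a prebuilt static array, indexed by angle // 45.
POV_DIRECTIONS = ["N", "NE", "E", "SE", "S", "SW", "W", "NW"]

def pov_to_direction(angle):
    """angle: POV hat angle in whole degrees (45-degree steps), or -1 when centered."""
    if angle == -1:
        return None
    return POV_DIRECTIONS[(angle % 360) // 45]

print(pov_to_direction(135))  # SE
print(pov_to_direction(-1))   # None
```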
The POV stuff wasn't all me. I extended the code, and although I would have coded it slightly differently, it works. Part of my intention for the program is for others to contribute, so I didn't want to erase their work.
As far as your pseudocode goes, I actually really like the general idea of it. Taking it a step further, you could completely automate the process. I know how to do it in C#, but not in AHK; apparently AHK does have some image-processing capabilities, but I wasn't able to fully work out the extent of them. Programmatically hit the Print Screen key, then open the image from within the program and use something simple like OpenCV to compare the images programmatically. I'm going to log off for a while, but good luck.
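As a rough illustration of the comparison step (pure Python here for simplicity; in practice you'd grab the frames with something like OpenCV or PIL and diff the pixel arrays):

```python
def screenshots_match(img_a, img_b, tolerance=0):
    """Compare two same-sized 2-D grids of pixel values."""
    for row_a, row_b in zip(img_a, img_b):
        for px_a, px_b in zip(row_a, row_b):
            if abs(px_a - px_b) > tolerance:
                return False
    return True

before = [[10, 10], [10, 10]]
after  = [[10, 10], [10, 12]]
print(screenshots_match(before, after))               # False
print(screenshots_match(before, after, tolerance=5))  # True
```

A small tolerance helps ignore compression noise or HUD flicker between the two captures.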