Mcgral18, on 08 March 2016 - 08:54 PM, said:
And is that any worse than PGI has done in the past?
No, but it's not any better, and that's the issue. Why should I trust you over them? What are you going to fall back on when your ideas don't work? What makes you think your would-be interpretations of any collected data are better? How do you even quantify and qualify your changes before committing them to code?
I do systems and trade-off analysis for a living. Those skills very much apply here.
Quote
I probably overdid the isSmalls, and perhaps the ERPPC cooldowns, but those are numbers I would very much like to test... but we cannot.
How am I supposed to get data if it's impossible?
You don't actually need that particular data to start, and that's where you are going wrong. See below.
Quote
My framework was looking at under and over performers, and touching them.
That's not robust enough, since that doesn't account for the "why," and it can have the unintended consequence of nullifying other weapons.
You, as an experienced player, should be familiar enough with the game to know roughly how much each weapon characteristic (heat, damage, duration, range, cooldown, etc.) affects that weapon's usefulness on the battlefield. As such, you should be able to assign weights to them. If you aren't confident in your own assessment, you can poll trusted top players (e.g., SJR or EmP) to get an educated average of how important each characteristic is.
You should also be capable of assessing how much benefit you gain or lose by changing the value of each characteristic. For example, we know more range is better, but we also know that beyond a certain point the return you get for additional range tapers off. We also know that range becomes increasingly punishing the further it drops below another threshold. The benefit curve for range is therefore not linear. Again, if you aren't confident, take a poll.
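To make the nonlinearity concrete, here is an illustrative sketch (not anyone's actual balance curves): a benefit function for range that collapses quickly below a minimum engagement floor and saturates above a comfort range. The `floor` and `comfort` numbers are made-up placeholders.

```python
import math

def range_benefit(range_m, comfort=400.0, floor=150.0):
    """Map a weapon's optimal range (meters) onto a 0-1 benefit value."""
    if range_m <= floor:
        # Below the floor, usefulness collapses quickly (quadratic drop-off).
        return 0.5 * (range_m / floor) ** 2
    # Above the floor, benefit keeps rising but saturates: each extra
    # meter of range is worth less than the last (diminishing returns).
    return 1.0 - 0.5 * math.exp(-(range_m - floor) / comfort)

print(round(range_benefit(90), 3))    # short-range laser: heavily punished
print(round(range_benefit(270), 3))   # mid-range: solid return
print(round(range_benefit(1500), 3))  # sniper range: near the ceiling
```

The two pieces meet at the floor (both evaluate to 0.5 there), so the curve is continuous; only its slope changes character.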
Then, you take your performance curves and set them all onto a common scale (translate the values to a 0-1 scale, for instance, using the governing equations for each). You then apply your weights in a weighted average to get a single score for the weapon. This score will not tell you in what role the weapon is useful, only that (assuming they all have the same score) the weapons have an equal place on the battlefield.
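The scoring step described above can be sketched in a few lines. The weights and the normalized stats below are made-up placeholders; the sketch assumes each characteristic has already been run through its governing equation onto a 0-1 scale where 1 is best (so for heat, "colder" maps closer to 1).

```python
# Illustrative weights for each characteristic (must be agreed upon,
# e.g. by polling trusted players); these are placeholders.
weights = {"damage": 0.30, "heat": 0.20, "range": 0.20,
           "duration": 0.15, "cooldown": 0.15}

def weapon_score(normalized_stats, weights):
    """Weighted average of 0-1 characteristic values -> a single 0-1 score."""
    total = sum(weights.values())
    return sum(weights[k] * normalized_stats[k] for k in weights) / total

# Hypothetical normalized profiles for two lasers (invented numbers):
is_ml  = {"damage": 0.45, "heat": 0.80, "range": 0.35,
          "duration": 0.75, "cooldown": 0.70}
c_erml = {"damage": 0.60, "heat": 0.55, "range": 0.70,
          "duration": 0.50, "cooldown": 0.60}

print(round(weapon_score(is_ml, weights), 4))   # 0.5825
print(round(weapon_score(c_erml, weights), 4))  # 0.595
```

Note that the two profiles land on nearly equal scores despite very different stat lines: that is exactly the point. Equal scores say the weapons have an equal place on the battlefield, not that they fill the same role.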
Yes, it's work to do this. It's not as easy as following your gut instinct, plugging in some new numbers that look about right, and then farming favs on the forums because Buckus Willylicker thinks that having his isML fill a role comparable to the cERML is the greatest thing since sliced bread.
For the record, and if it makes you feel better, I did do all of this. I did generate governing equations, and I did get numbers very similar to yours when I chose to balance that way. But I could also tweak the numbers in other ways and arrive at the same score. I can keep my isML short-ranged but make it shoot colder, faster, and with a shorter duration, so that it retains a niche where it completely out-classes the cERML, just as the cERML retains a niche where it completely out-classes the isML.
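With a weighted-average score, an "equitable trade-off" has a clean form: a buff of delta to one normalized characteristic is score-neutral if another characteristic gives up delta times the ratio of their weights. The weights below are the same kind of illustrative placeholders as above.

```python
# Placeholder weights for two of the characteristics being traded:
weights = {"heat": 0.20, "range": 0.15}

def offsetting_nerf(buff_delta, w_buffed, w_nerfed):
    """Nerf (in normalized 0-1 units) that exactly cancels a buff's
    contribution to the weighted-average score."""
    return buff_delta * w_buffed / w_nerfed

# Make a laser 0.10 "colder" (heat benefit +0.10) while holding its
# overall score constant by giving up range benefit:
nerf = offsetting_nerf(0.10, weights["heat"], weights["range"])
print(round(nerf, 3))  # 0.133 normalized range benefit given up
```

The point is that the math tells you what the trade is worth before you commit it to code; gut-feel numbers give you no such accounting.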
Equitable trade-offs are the name of the game, and you have no idea what the actual value of the trade-offs you are making will be.