
A Refined Dynamic Battle Value System



#1 Amaris the Usurper

    Member

  • Ace Of Spades
  • 100 posts

Posted 07 March 2014 - 01:25 AM

In this post, I outline a matchmaking system that
  • should tend to produce well-balanced matches, and
  • does not restrict the kinds of equipment players can bring to the field.
More specifically, there are neither hard restrictions on chassis/equipment combinations nor increased waiting times for highly-optimized (or merely popular) builds. The system is self-correcting and does not require arbitrary battle value (BV) assignments by the developer.

First, some motivation. The current matchmaking system (if I am not mistaken) attempts to match the summed Elo ratings of each team, with a separate rating maintained for each combination of player and weight class. No attempt is made to distinguish between builds or to account for the advantages of teamplay except by limiting the maximum team size. As a result, there is a strong incentive to run top-tier builds in groups of four. If you elect not to do this, then someone else will, to their advantage and your loss.

Currently, despite recent balancing efforts, "top-tier" means "jump-capable with ACs and PPCs" (i.e., pinpoint alpha weapons) or "fast and agile with massed MLs." Previously, the heavy end of the metagame was dominated by 2 PPC + GR or 4 PPC builds; before that, massed SRMs were practically mandatory on mediums and up, and overpowered Streaks made the RVN-3L the dominant light. There have even been times when massed LRMs were a dominant strategy, despite the negligible skill required to operate them (and the consequent boredom).

Considered outside the context of the prevailing metagame, there is nothing inherently wrong with any of the build choices just mentioned. Neither is there anything wrong with teamwork. Rather, in general terms, the problems are that
  • players must choose one of a few dominant builds or lose frequently, so that customizability--such a large part of the appeal of the MW franchise--is in reality very limited, and
  • solo players must often face off against organized groups with similar equipment, resulting in quick-slaughter scenarios that involve little fun or challenge for either side.
Both problems result from the inadequacies of the matchmaking system.

The alternative system that I propose is simple in concept. We start from the idea that there is some orderly way of predicting the victor in a given match. For example, we might compute an aggregate battle value for each team, based on both 'mech type and player skill, e.g.,

[Team BV] = [BV of 'Mech 1] * [Skill of Player 1] + ... + [BV of 'Mech 12] * [Skill of Player 12],

and compare the BVs to find the likely victor. The matchmaker would attempt to equalize BVs to within some margin. Multiple groups of arbitrary size could be accommodated by applying multipliers to individual skill ratings, one for each group size between 2 and 12. Individual variants could even be subdivided to allow better resolution (e.g., HGN-733Cs relying on lasers and SRMs would be distinguished from HGN-733Cs relying on pinpoint alpha, so that one does not have to use the top-tier build or play at a disadvantage). For definiteness, the 'mech BVs, player skill ratings, and other parameters would be required to lie between 0 and 1; this is, in fact, no restriction, since any finite set of values can be rescaled into that range. I am not arguing that this system is uniquely the best (although it seems reasonable); it is merely an example.
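
For concreteness, here is a minimal sketch of such a predictor in Python. Every name and number in it is my own invention for illustration; the group-size multipliers are placeholders that the tuning process described below would determine, and (per the normalization just mentioned) all parameters lie in [0, 1]:

    # Minimal sketch of the aggregate-BV predictor described above.
    # Every name and number is an illustrative assumption.

    # Hypothetical group-size multipliers; larger groups get larger
    # values. These would be tuned from data, not set by hand.
    GROUP_MULTIPLIER = {1: 0.88, 2: 0.92, 3: 0.95, 4: 1.00}  # ... up to 12

    def team_bv(team):
        # `team` is a list of (mech_bv, player_skill, group_size) tuples,
        # with mech_bv and player_skill normalized to lie in [0, 1].
        return sum(bv * skill * GROUP_MULTIPLIER.get(size, 1.0)
                   for bv, skill, size in team)

    def acceptable_match(team_a, team_b, margin=0.05):
        # Matchmaker test: aggregate BVs equal to within a relative margin.
        a, b = team_bv(team_a), team_bv(team_b)
        return abs(a - b) <= margin * max(a, b)

The matchmaker would simply decline to launch any pairing that fails the acceptable_match test and keep searching.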

The question is: How do we determine the constants appearing in the predictor model so that it makes accurate predictions?

Clearly, we can plug in arbitrary values for the constants and then try the predictor out against every game that has occurred in a given time period. We can then compute the percentage of correct predictions. This gives a measure of how well the predictor is performing; obviously, with randomly-chosen constants, we expect the performance to be poor.
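
As a sketch (again with invented names), the performance measure might look like the following, where `history` is an assumed list of past matches recorded as (team A, team B, actual winner):

    def predict(params, team_a, team_b):
        # Apply the BV model using the candidate constants in `params`,
        # an assumed dict of dicts: params["mech_bv"], params["skill"],
        # and params["group_mult"]. Each team is a list of
        # (mech, player, group_size) keys into those dicts.
        def bv(team):
            return sum(params["mech_bv"][mech] * params["skill"][player]
                       * params["group_mult"][size]
                       for mech, player, size in team)
        return "A" if bv(team_a) >= bv(team_b) else "B"

    def prediction_accuracy(params, history):
        # Fraction of recorded matches the model calls correctly;
        # `history` is a list of (team_a, team_b, actual_winner) records.
        correct = sum(1 for a, b, winner in history
                      if predict(params, a, b) == winner)
        return correct / len(history)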

Unfortunately, the function we are trying to maximize is not smooth at all, so there is no way to use calculus to come up with a "direction" (in higher-dimensional space) in which the parameters should be adjusted to improve the predictions. However, uncooperative problems like this one appear in management and engineering settings all the time, and special methods have been developed to deal with them. For example, genetic algorithms attempt to evolve a solution by mimicking biological evolution, and various machine-learning techniques would also be applicable; the problem is fairly similar to sorting junk email and other data-mining applications. There are many other methods; see the Wikipedia article on global optimization.
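
For instance, a bare-bones genetic algorithm for this tuning problem might look like the sketch below. It is only a toy: the population size, mutation rate, and selection scheme are arbitrary placeholders, and a production version would need far more care.

    import random

    def evolve(fitness, n_params, pop_size=100, generations=500,
               mutation_rate=0.05):
        # Toy genetic algorithm: evolve vectors of n_params values in
        # [0, 1] so as to maximize the supplied fitness function.
        population = [[random.random() for _ in range(n_params)]
                      for _ in range(pop_size)]
        for _ in range(generations):
            population.sort(key=fitness, reverse=True)  # fittest first
            survivors = population[:pop_size // 2]      # selection
            children = []
            while len(survivors) + len(children) < pop_size:
                mom, dad = random.sample(survivors, 2)
                cut = random.randrange(1, n_params)     # one-point crossover
                child = mom[:cut] + dad[cut:]
                for i in range(n_params):               # point mutation
                    if random.random() < mutation_rate:
                        child[i] = random.random()
                children.append(child)
            population = survivors + children
        return max(population, key=fitness)

To connect this to the accuracy measure above, one would flatten the model's constants (mech BVs, skill ratings, group multipliers) into a single vector and pass something like lambda v: prediction_accuracy(decode(v), history) as the fitness function, where decode is a hypothetical helper that unpacks the vector into the model's dictionaries.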

I realize that this material is likely to be outside the comfort zone of most readers (and perhaps PGI employees), but I encourage you not to dismiss all of this as mere theorizing. The methods that I mention are constantly used to solve real-world problems, and the expertise needed to implement them exists at a reasonable price. Genetic algorithms, for example, lie within the programming abilities of undergraduate engineering students who have had a single computer science course.

In conclusion, my proposal is to
  • employ a BV model (or other predictor) that takes into account 'mech type, player skill, and group size,
  • tune the model for best accuracy by using industry-standard optimization techniques in conjunction with historical data about match outcomes, and
  • use the resulting model to produce balanced matches.

[This space reserved for discussion of a more detailed predictor model involving offensive and defensive BV.]

Edited by Amaris the Usurper, 07 March 2014 - 09:23 AM.





