Viktor Drake, on 13 June 2017 - 05:41 PM, said:
Oh god, we have a WoT player here.
Statistically, with 12 players per team, unless you are an MWO god there is always going to be at least one player on the enemy team as skilled or more skilled than you, pretty much guaranteed; however, only one of you can win the battle. Another way to think of it: what happens if Michael Jordan plays an exact duplicate of himself? The answer is that whichever clone is luckier or makes the fewest mistakes that particular game wins the match. Now take those same two clones, add 4 random members to each of their teams, and have them face off. Guess who wins now? Yep, the TEAM that has the most skilled players. What is my point? The TEAM is what determines win or loss, not you by yourself.
Lol ... nope, never touched WoT.
However, I do math and science for a living.
The idea behind team-based Elo calculations is that each player has a rating. This rating contributes to their team's value (whether the player is Michael Jordan or a street-side pick-up player). The true value of each rating is unknown at the beginning of the analysis, so everyone is given a common starting point.
There are two pieces of information available.
1) The current estimated skill rating of every player in a match ... and thus the cumulative current rating for each team.
2) The match outcome - which team wins and which loses - I don't care who is on which team.
Assuming that other factors are equal, the more skilled team will usually win the match. These team values are used to set the range of change assigned to the winning and losing teams. A prediction of which team will win is made from the cumulative team skill ratings (you could factor DC/AFK players into a revised expectation at the end of the match).
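To make that prediction step concrete, here is a minimal sketch in Python. The logistic curve and the 400-point scale are the conventional Elo choices, and summing the 12 individual ratings into a team value is just one simple way to combine them; none of this is MWO's actual implementation.

```python
def expected_win_probability(team_a_rating, team_b_rating, scale=400.0):
    """Probability that team A beats team B, using the standard Elo
    logistic expectation on the cumulative (summed) team ratings."""
    return 1.0 / (1.0 + 10.0 ** ((team_b_rating - team_a_rating) / scale))

# Example: team A's 12 ratings sum to 18200, team B's to 18050.
# The slightly stronger team A is favoured to win about 70% of the time.
p_a = expected_win_probability(18200, 18050)
```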
The outcomes are as follows (a sketch of the update arithmetic follows the list):
1) The team that was expected to win the match wins
- winning team ratings go up a bit, losing team ratings drop a bit
- the change depends on how evenly matched the teams were
2) The team that was expected to win the match loses
- winning team ratings go up more because they weren't expected to win, and losing team ratings drop more
- the amount of change depends on how mismatched the teams were expected to be
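Both cases are the same arithmetic with opposite signs: each rating moves by K * (actual - expected). A hedged sketch continuing the one above (K = 32 is a common Elo default, not a known MWO value, and applying the same delta to every player is just one reasonable choice):

```python
def update_ratings(team_a, team_b, a_won, k=32.0):
    """Apply the Elo-style update to every player on both teams.
    team_a / team_b are lists of player ratings; a_won is True if team A won."""
    # Expected score for team A (same logistic expectation as the sketch above).
    p_a = 1.0 / (1.0 + 10.0 ** ((sum(team_b) - sum(team_a)) / 400.0))
    # (actual - expected) is small for an expected result and large for an
    # upset, so upsets move ratings more, exactly as in cases 1 and 2.
    delta = k * ((1.0 if a_won else 0.0) - p_a)
    return [r + delta for r in team_a], [r - delta for r in team_b]

# Example: a favoured team A loses, so its players each drop by well over k/2.
new_a, new_b = update_ratings([1550] * 12, [1500] * 12, a_won=False)
```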
This applies to quick play solo queue - a separate rating is needed for group play, since teamwork strongly affects the basic win/loss expectation.
You then repeat this process for every completed match. Each player's rating evolves over time according to the long-term statistical outcomes of the matches they participate in.
In your example of the Michael Jordan clones ... if they play 1v1 then luck is the dominant factor, and over a large dataset they will each win and lose essentially the same number of matches ... they will end up with the same skill rating.
Let's throw these clones into matches with 4 other random folks on each team, each of whom also has a rating. If the ratings of those other players are randomly distributed, then the two clones will STILL have a 50/50 chance of winning a match over the long term and will STILL end up with the same rating. Some matches will be stomps one way and some the other ... but over enough matches, as long as there is no bias in the selection of teammates, the ratings derived for the two Michael Jordan clones (and for all the other random players across all their matches) will work out to the same values. This is because, from any one player's perspective, the "signal" element in a match is that player's own rating, and everyone else's ratings are noise. In a full 12v12 match there are in fact 24 signals (the individual player ratings) being combined with luck (the noise element) to give each match outcome.
With enough data, all of the 24 signal values (the player ratings) can be determined.
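That claim is easy to check empirically. Below is a hedged Monte Carlo sketch (all parameters - the 1600 clone skill, the teammate and luck distributions, K = 16 - are invented for illustration): match outcomes are decided by total true skill plus a luck term, and the Elo-style update is applied to the clones only, since the fresh random teammates are exactly the "noise" described above. The two estimated ratings fluctuate around a common level rather than diverging.

```python
import random

def simulate_clone_convergence(n_matches=20000, clone_skill=1600.0, seed=1):
    """Monte Carlo check of the argument above: two equally skilled 'clones'
    on opposite 5-player teams, each with 4 fresh random teammates per match,
    end up with essentially the same estimated rating."""
    rng = random.Random(seed)
    rating_a = rating_b = 1500.0  # estimated ratings, common starting point
    for _ in range(n_matches):
        # True skills of the random teammates (unknown to the rating system).
        mates_a = sum(rng.gauss(1500, 100) for _ in range(4))
        mates_b = sum(rng.gauss(1500, 100) for _ in range(4))
        # Outcome: higher total true skill plus luck (the noise term) wins.
        a_won = (clone_skill + mates_a + rng.gauss(0, 300)
                 > clone_skill + mates_b + rng.gauss(0, 300))
        # Elo-style update on the clones only; teammates are one-off randoms.
        p_a = 1.0 / (1.0 + 10.0 ** ((rating_b - rating_a) / 400.0))
        delta = 16.0 * ((1.0 if a_won else 0.0) - p_a)
        rating_a += delta
        rating_b -= delta
    return rating_a, rating_b

print(simulate_clone_convergence())  # both values stay close to each other
```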
This is the fundamental basis for a team-based Elo player rating system, and most folks don't seem to understand it because of its statistical nature. Individual match outcomes and performance are completely irrelevant. Everyone has bad matches, and everyone has good ones. It is the long-term performance - how well the individual player achieves the game objectives (i.e. winning) with their teammates - that is used to evolve a player rating.
--- Sigh ... I shouldn't have wasted my time typing all of the above.
Here is a link to the TrueSkill ranking system developed by Microsoft Research. It is essentially what I am discussing above.
https://www.microsof...ranking-system/
" The TrueSkill ranking system only uses the final standings of all teams in a game in order to update the skill estimates (ranks) of all gamers playing in this game."
From the FAQ:
A: The only information the TrueSkill ranking system will process is:
- Which team won?
- Who were the members of the participating teams?
Here is the average number of games required to identify a player's skill level:
Game Mode                       Number of Games per Gamer
16 Players Free-For-All         3
8 Players Free-For-All          3
4 Players Free-For-All          5
2 Players Free-For-All          12
4 Teams / 2 Players Per Team    10
4 Teams / 4 Players Per Team    20
2 Teams / 4 Players Per Team    46
2 Teams / 8 Players Per Team    91
With 2 teams and 12 players per team, it is probably significantly more games.
Anyway, the bottom line is that this type of rating system (or some variation of it), based on team outcomes, is available and used in this industry. Any rating system based on numbers scored by individuals in a match actually harms game play, since it motivates players to chase high scores rather than achieve in-game objectives like winning.