2014 MLS Salaries Visualized

Yesterday was Christmas for a certain breed of MLS nerd. The MLS Players Union released 2014 player salaries for public consumption. The release does not reflect the most labyrinthine aspects of MLS wage rules, like retention funds, allocation money, etc., but I don’t know of another soccer league anywhere that has player wage data like this available mid-season.

I have delved into previous MLSPU salary releases before, and I strongly advise that people resist the urge to focus on specifics here. The odds that a player’s salary listed here is their exact salary cap cost fall somewhere on a spectrum between unlikely and impossible. Only the first 20 players on the roster count toward the cap, and the MLSPU release does not reflect allocation money, retention funds, former/lending clubs continuing to pay some wages, or special player statuses like Homegrown, Generation Adidas, or especially Designated Player. It almost feels appropriate that on the MLSPU website this file is listed as up to date through April Fools’ Day. However, we can get a good sketch of which clubs spend most, and of the wage disparities between the top and bottom of each club’s roster.

According to this release, yes, the guaranteed compensations of Clint Dempsey, Michael Bradley, Jermain Defoe, Landon Donovan, Robbie Keane, and Thierry Henry are higher than those for the full rosters of at least 12 of the 19 clubs in MLS. Those six (1.1% of players on club rosters) make 28.5% of the league’s total player wages. Also, the lowest salary reported, $36,500, made by 54 different players, is 0.5% of the highest, Dempsey’s $6,695,189. Of course, those players all bring in merchandising, ticket sales, and headlines that the grunts don’t. It would not be surprising if MLS and club accountants file some portion of the big player expenditures under marketing instead of wages.
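For anyone who wants to poke at the figures themselves, those shares fall out of very simple arithmetic. Here is a minimal sketch, assuming the release has been exported to a CSV with one row per player and a guaranteed-compensation column (the file name and column name are my own placeholders):

```python
# Minimal sketch of the share calculations above. The CSV layout is an
# assumption, not the MLSPU's actual file format.
import csv

with open("mls_salaries_2014.csv", newline="") as f:   # hypothetical export of the release
    salaries = [float(row["guaranteed"]) for row in csv.DictReader(f)]

salaries.sort(reverse=True)
top_six = salaries[:6]   # Dempsey, Bradley, Defoe, Donovan, Keane, Henry

print(f"Top six as a share of players: {6 / len(salaries):.1%}")              # ~1.1%
print(f"Top six as a share of wages:   {sum(top_six) / sum(salaries):.1%}")   # ~28.5%
print(f"Lowest salary vs. highest:     {min(salaries) / max(salaries):.1%}")  # $36,500 / $6,695,189 ~ 0.5%
```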

About a month ago I showed that total salaries have been a very poor predictor of league points going all the way back to the first MLSPU release in 2007. That’s not to say they are irrelevant, but their influence on the field is far more subtle than in the big European leagues, which some have joked might as well be played on a balance sheet.

Figures like this are sure to be a major topic of conversation in the upcoming collective bargaining negotiations between MLS and its Players Union. The current CBA expires after this season, and most of the expected points of contention are related, in part or in whole, to salary disparity. The players will want higher minimums, free agency, and a big boost to the salary cap, while the league will likely seek to maintain as much of the status quo as it can in the name of profitability and stability. Fans of the league would be well served by becoming at least passingly familiar with the wage dynamics at play as MLS heads toward this critical juncture. Hopefully the above visual will clarify the issues for some.

Fewer and Fewer Clubs in Europe Have Anything Left to Play For

While there are some thrilling title, Champions League, and relegation races occurring in European football leagues, a majority of clubs are stuck in the middle of the table, with little real reason to push for results over the rest of the season. Storied entities like Manchester United, Inter Milan, and AC Milan are among the 53% of sides in the Premiership, Bundesliga, La Liga, Serie A, and Eredivisie that seem certain neither to be relegated nor to qualify for the 2014-2015 UEFA Champions League via their final rank in the 2013-14 league table.

The tricky part here is defining the point at which a club fits this categorization. Technically, Tottenham could win out, reach 74 points, and hope that everyone above them magically falls apart. Thus Spurs aren’t mathematically eliminated from taking the title (despite a 50-goal gap in goal difference with Liverpool, who already have 74), but in reality they are enormously unlikely to reach that point total, and even if they did, it would probably only put them in the running for a Champions League spot, which is far from guaranteeing one.

I will simply look at the difference between each club’s points per game (PPG) thus far and the PPG they would need from here on to reach their nearest point target, be it their league’s title, the last UCL spot, or relegation avoidance. Sound familiar? That’s because it is an extension of my interactive league tables. This is rather basic, which is why I am being rather liberal in setting my PPG-change cutoff at 0.55. For example, West Bromwich Albion look rather safe on 32 points with a need for only 3 more, but they are marked as “at risk of relegation” because their 1.0 PPG is just a hair too close to the 0.5 PPG needed over their last 6 fixtures. If you feel that 0.55 is too high or too low, type a different number between 0.2 and 1.0 into the interactive illustration below.
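Here is a minimal sketch of that classification using the West Brom example, assuming a 38-match season and a 35-point relegation target (the target is implied by the 3 points they still need; the real calculation lives in the Tableau workbook, not in code like this):

```python
# Sketch of the "still in contention" test described above; not the actual
# Tableau workbook logic.
def ppg_gap(points, played, target, total_matches=38):
    """Current PPG minus the PPG needed from here on to reach a point target."""
    current_ppg = points / played
    remaining = total_matches - played
    needed_ppg = max(target - points, 0) / remaining if remaining else 0.0
    return current_ppg - needed_ppg

CUTOFF = 0.55

# West Bromwich Albion: 32 points from 32 matches, needing 3 more to reach 35.
gap = ppg_gap(points=32, played=32, target=35)
print(f"gap = {gap:.2f}")                                   # 1.00 - 0.50 = 0.50
print("at risk of relegation" if gap < CUTOFF else "safe")  # 0.50 < 0.55
```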

Alongside the filters on this page, the categorization of clubs here also interacts with the point targets on every other tab of the Dashboard (it will also update every time I fold future results into the Dashboard). Lower the Premiership’s relegation point target to 34, flip back to Standing Rigidity, and you will see Norwich and WBA happily join the Pointless. Within that category, clubs like Man United and both Milanese superclubs must find it disgraceful to be alongside clubs like Stoke, QPR, and Genoa, who are delighted to have nothing left to play for at this stage. That is the reality for an abundance of teams, though, left only to play spoiler to their rivals and evaluate their approach for next year. Add in the clubs already locked in at either extreme of the table, and we see that 70% of Europe is simply playing out the string.

Admittedly, this analysis does ignore Europa League qualification, but don’t most fans do that anyway? Besides, Europa qualification via national cups and other criteria holds enough sway that I didn’t feel those spots were worth tracking in detail.

I don’t bring this up to taunt or praise, but largely to inform neutrals who want to decide which match to watch. Obviously, everyone knows that Manchester City’s visit to Anfield on Sunday will play a massive role in the eventual crowning of a Premiership champion, but the trick is usually in deciding between the mass of synchronized fixtures on Saturday. Palace-Villa and Stoke-Newcastle don’t hold any importance, and WBA-Spurs teeters on the edge of pointlessness. Fulham-Norwich is the top choice for real implications, and arguments could also be made for Sunderland-Everton and Southampton-Cardiff.

For the record, here are the members of the Pointless Plethora as things stand today, and I will check this list in a couple weeks to see if any rallied or swooned enough to alter their current station dramatically.

The Pointless Plethora as of the 8th of April, 2014.

The biggest weakness of this analysis is that it does not adjust for strength of schedule. That can be very important, especially if you use simulations to inform wagering on outcomes. However, with so few matches left, I prefer to be concerned with general possibilities. Strange swings of (mis)fortune can be enormous factors over five-to-seven matches, defying even the smartest prognostications, so I say let’s concentrate on where teams really stand, focus on those in contention for important standings, and enjoy the ride.

Aiming for Improvement in Response to Criticism

Searching online for examples of bad data visualization is one of my guilty pleasures. Usually I am simply seeking a quick hit of schadenfreude, but a couple of times I have found errors that I would have been tempted to make myself, and learned from the experience. The other day I was hit with an entirely different reaction, when my most popular data creation (based on page views) popped up in my Google image search for “Worst Data Visualization Ever.”

Walter Hickey, a journalist presently writing for FiveThirtyEight, whose work I enjoy (particularly this piece on box office and gender coverage in Hollywood films), had included the above image and description of my MLS salary visualization in a column titled The 27 Worst Charts of All Time, which was posted on Business Insider last June. I was taken aback because, while I knew the graph wasn’t the Platonic ideal of Tufte-ian visualization standards, it had garnered myriad retweets, upvotes, and page views within the MLS blogosphere soon after I created it. I had received a few tweets seeking clarification, but I don’t remember any expressing utter confusion. As the above image is only part of the data visualization, it is unclear whether Hickey even saw the full interactive version. Also on the list were bad graphs published by the White House, the Wall Street Journal, the Human Rights Campaign, Bloomberg, Gallup, three from Fox News, etc. So at least I was in mostly good company…

Luckily, Hickey also included a link to Junk Charts, whose more constructive criticism had presumably alerted him to my work. Kaiser Fung’s assessment, titled “More power brings more responsibility,” noted that:

“(1) Sorting the bars by total salary would be a start.

(2) The colors and subsections of the bars were intended to unpack the composition of the total salaries, namely, which positions took how much of the money. (3) I’m at a loss to explain why those rectangles don’t seem to be drawn to scale, or what it means to have rectangles stacked on top of each other. (4) Perhaps it’s because I don’t know much about how the cap works.

Combined with the smaller chart (shown below), the story seems to be that while all teams have similar cap numbers, the actual salaries being paid could differ by multiples.”

(numbers added by me)

There are valid criticisms here alongside an admitted knowledge gap. First off, Fung is absolutely right that I should have sorted the bars (1). That’s an inexcusable, silly oversight on my part. Also, the criticism of my coloring the segments of this tree graph by position (2) is valid, and I shouldn’t have let it become a perceived focus of the graph. A simple color scale based on player salary probably would have been much more effective.

The main takeaway I wanted from the graph was the massive disparity between the best- and worst-paid players in MLS, as well as a comparison of total club salaries. For some viewers, the coloring by position obviously distracted from that. (3) pointed me toward a minor issue with the graph that had escaped my attention. Tableau should draw the rectangles to scale, but there seems to be roughly a tenth of a percentage point of error in rectangle sizing. The league’s minimum salary ($35,125) is 0.88% of Robbie Keane’s guaranteed compensation of $4,000,000. In a screenshot of the chart, a minimum-salary player’s rectangle ranged from 119 to 130 pixels, or 0.72-0.79% of Keane’s 16,555-pixel rectangle. For me this is an acceptable level of imprecision, since it allows me to chart total spending and compare individual player salaries in a single view.
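For anyone who wants to check that comparison, the arithmetic is trivial (the pixel counts are the ones measured from the screenshot):

```python
# Quick check of the rectangle-scale comparison above.
keane_salary, minimum_salary = 4_000_000, 35_125
keane_pixels = 16_555

print(f"Expected ratio: {minimum_salary / keane_salary:.2%}")   # 0.88%
for measured_pixels in (119, 130):
    # both measured rectangles come out a bit short of the expected 0.88%
    print(f"{measured_pixels} px -> {measured_pixels / keane_pixels:.2%}")
```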
Taking these points into account, here’s my first draft of a re-design:

The major misunderstanding in Fung’s post rears up in his attempt to re-visualize my chart (bar and pie versions appear later in his post):

First, no soccer fan would ever order the positions: defender, forward, midfielder, goalkeeper, other. More seriously, lumping defender-midfielders and midfielder-forwards under a single “other” group is misleading, as these two types of players are enormously different. Fung admitted earlier that he didn’t know how the cap works (4), and this contextual oddity probably came from a scant knowledge of the sport itself. Even without that issue, I think I have come up with a better way to improve upon my positional and overall breakdown of each club’s spending:

Self-Assessment

In hindsight, I wasn’t thrilled with the graph when I created it, which is why I posted it on reddit/MLS instead of one of the blogs I was writing for at the time. I figured I would receive some constructive criticism, but instead it quickly passed 5,000 page views and kept going (its most recent version recently passed 14,000), and the popularity became an end in and of itself. Whenever the MLS Players Union released updated salary information, I felt obligated to update the chart. Data visualization can be oriented toward page views or toward clarity and usefulness, and I fell into the common trap of feeling that if something is popular, it must be well organized, despite plenty of evidence to the contrary.

Also, when posting something online, I need to be more cognizant of the chance that it will reach beyond its intended audience. MLS supporters were very open to a visualization of the league’s salaries, and may already have had a good enough basic understanding of MLS wage dynamics that they saw past the chart’s oddities. I can sympathize with Hickey and Fung’s confusion, and I should have organized the chart in a way that was more universally understandable. I am thankful that both of them offered a critique, though I wish one of them (or any of the readers who saw my Twitter handle within the graph) had reached out to me at the time.

I always love to receive thoughtful constructive criticism. Please don’t hesitate to tell me if any of my work seems spurious or unclear, or if you see another website discussing my work. I have never claimed that anything I post is beyond reproach, far from it. I greatly value any opportunity to review a smart assessment of my analysis, data visualization, and writing.

Newly American Prospect Julian Green Projects to be a Top 100 Player

Last week, Bayern Munich wunderkind Julian Green committed to switching his national team allegiance from Germany to the United States. The switch was approved by FIFA on Monday, and on Wednesday it was unsurprisingly announced that Green had been called up by Jurgen Klinsmann for next week’s Mexico friendly. The reaction to this news has been interesting, as some fans trumpet the arrival of a perceived savior of US Soccer, while smart writers work to temper short-term expectations, dive into the process of wooing Green, or bring to the surface the undertones of how nationality has been and should be defined in these circumstances.

Naturally, I looked for an analytic that might shed light on Green’s current and potential level of play. Unfortunately, as can be expected with an 18-year-old, his statistics are sparse. ESPNFC had only his 3 Champions League minutes, Transfermarkt listed 28 matches’ worth of data (1,870 minutes), and the best I could find was SoccerWay with 50 matches, covering 3,967 minutes.

Green’s stat line in that largest resource seems impressive: 25 goals, or 0.57 per 90 minutes, and this year alone with Bayern Munich II he’s on an astronomical 0.81 pace. But how to adjust for playing a few years above his age bracket, while several tiers below the Bundesliga? The only USMNT comparison I could find was Terrence Boyd’s SoccerWay page. Boyd played 2,041 minutes for Hertha Berlin II through 2011, when he was 20, scoring 0.62 goals per 90. Scale back to Green’s age and there’s only a minuscule sample of 2 goals over 429 minutes (0.42 G/90).
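The per-90 arithmetic behind those figures is nothing more than this (the goal and minute totals are the SoccerWay numbers quoted above):

```python
# Goals-per-90 arithmetic for the figures quoted above.
def goals_per_90(goals, minutes):
    return goals * 90 / minutes

print(f"Green, career to date: {goals_per_90(25, 3967):.2f} G/90")  # ~0.57
print(f"Boyd at Green's age:   {goals_per_90(2, 429):.2f} G/90")    # tiny sample
```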

Thankfully, GoalImpact swooped in with their scoring of Green’s young career and how he projects to mature:

For those who haven’t heard of GoalImpact before, it’s a metric that basically takes the goal differential while each player is on the pitch in every match it can find, and aggressively adjusts that ± based on opposition strength, an aging curve, and other factors. So we are now beyond Green’s individual scoring rate, and on to how much better his teams have performed when he has been facing older opposition, and where his likely talent peak lies. It’s a very interesting process, which I have grossly simplified, but you can read more about it here. The mind-bogglingly large data mine driving this thing has been used to track the career progressions of an absurdly high proportion of professional footballers worldwide.
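To make that basic idea concrete, here is a grossly simplified sketch of an opposition-adjusted, on-pitch goal differential. This is emphatically not GoalImpact’s actual model; the data structure, the adjustment, and the example numbers are invented purely for illustration.

```python
# Grossly simplified sketch of an opposition-adjusted on-pitch goal differential.
# NOT GoalImpact's model; the adjustment and the example records are invented.
from dataclasses import dataclass

@dataclass
class Stint:
    goals_for: int            # goals scored by the player's team while he was on the pitch
    goals_against: int        # goals conceded while he was on the pitch
    minutes: int
    opponent_strength: float  # e.g. 1.0 = average opposition, >1.0 = stronger

def raw_rating(stints):
    """Opposition-weighted goal differential per 90 minutes across all stints."""
    adjusted_diff = sum((s.goals_for - s.goals_against) * s.opponent_strength for s in stints)
    minutes = sum(s.minutes for s in stints)
    return adjusted_diff / minutes * 90 if minutes else 0.0

# Two hypothetical appearances: a full match against strong opposition, a half against weaker.
print(f"{raw_rating([Stint(2, 0, 90, 1.1), Stint(1, 1, 45, 0.9)]):.2f} adjusted GD per 90")
```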

Through Twitter exchanges, GoalImpact offered some useful context for his projection of Julian Green as an eventual worldwide top 100 player. The current rating and forecast come from data on 50 matches with 3,847 minutes. While GoalImpact defends his projection, he categorized it as “unsecure,” and would not give a confidence interval for it due to the possibility of injuries, etc. that can hamper young players’ development. He also identified Shawn Parker (another dual national who has yet to decide between Germany and the US) and Jack McBean as pretty good prospects for the US, but not top-100 ones.

GoalImpact does have a good track record of young-player prognostication. A recent backtest comparing the 2007 prospects identified by the metric with those players’ present accomplishments is of particular interest here. It is impressive that it identified the likes of Gareth Bale and Gonzalo Higuain via only 2007 data, but even more so that it tagged relative unknowns at the time like Axel Witsel, Neven Subotic, Jonny Evans, and the Bender twins, Lars and Sven. To be fair, the list also includes Jozy Altidore and others who haven’t lived up to their early GoalImpact billing. However, the mass of 2007 prospects identified by the metric have become abundantly more valuable over the last seven years.

Nothing is guaranteed, but given Green’s Bayern pedigree and good ratings from both traditional and quantitative scouting, there’s a very respectable chance that a few years from now he will be the best player wearing whatever outfit Nike will have fashioned for the US National Team. His combination of club pedigree and accomplishments at a young age separates him from such vaunted names as Tab Ramos (first USMNT cap at 22), Thomas Dooley (31), Earnie Stewart (21), and Roy Wegerle (28). Eventually passing any of those figures in the hierarchy of US Soccer heroes is a daunting goal, but Green’s ceiling goes well beyond that. Never before has a truly promising young player valued by a national team as powerful as Germany’s chosen the USA instead.

For now, we should all be excited to see what the kid can do in friendlies, particularly the meeting with Mexico next week. Many want to talk about whether Green can or should make the World Cup team. As Alexi Lalas has been saying, there are legitimate chemistry issues to consider, because the last thing this team needs is a repeat of the 1998 fiasco, when David Regis replaced Jeff Agoos at the last minute. To his credit, Green and his father are saying the right things, and he seems to be getting along with his new teammates.

Plus, a player of Green’s prospective skills seems well suited for a bench role on this squad. It would be shocking to see him pass Landon Donovan, Clint Dempsey, Graham Zusi, and Alejandro Bedoya in the next two months, but Klinsmann probably wants at least one more midfielder who can bring something to the attack, and none of his other options in Brad Davis, Brek Shea, Sacha Kljestan, Joe Corona, Mix Diskerud, and Jose Torres seem to have booked their ticket to Brazil yet. Klinsmann won’t need to bring Green along for cap-tie purposes; his one-time switch already makes him permanently a US player, so such an assignment would have to be merit-based. To win one of the last roster spots in Brazil, Green will need to prove his worth on the field and win over the other players in the locker room, both in short order. That is a lot to ask, but Julian Green has the opportunity to deliver, and his chances of succeeding have to be respected.

MLS Goal of the Year Dashboard

I’ve also made an alternate version of the following chalkboard sized for use on an iPad. Its extra filters and larger circles make it a little easier to use on smaller touchscreens, too.

Last month’s OptaPro Analytics Forum was built around analyses of soccer data that were all quite thought-provoking. Opta’s presentation of their professional tool, VideoHub Elite, was just as intriguing as some of the research, though. As a service to their partner clubs, they provide essentially an unbound version of their MLS Chalkboards. You can use filters to see every action a player of your choosing performed in any match for which Opta collected data.

Clubs who want to see Lionel Messi’s greatest hits can simply filter to his name, then goals, assists, dribbles, or any other action type, then click event locations to get a reel of brilliance. The MLS nerd in me swung entirely the other direction: I had the Opta rep pull up Fabian Castillo’s plethora of unsuccessful dribbles and witnessed the young Colombian failing to get past an opponent less than 30 yards from his own keeper.

I was thoroughly impressed (with the tool, not Castillo’s decision-making), but for a while my only lingering thought about it was jealousy of club analysts who get to use VideoHub Elite every day. It struck me recently, though, that I could build a smaller-scale version in Tableau. Thanks to the MLS website’s embed codes for their 2013 Goal of the Year nominees, and a huge assist from Opta on shot locations and other data for these 64 tallies, I was able to produce the following:

Embedding video within the dashboard goes beyond the standard visualization options in Tableau, but that’s part of the beauty of this software. If you get an idea, just Google “___ in Tableau,” and often you will find that someone has a way to make it work. In my case, “embed video in Tableau” yielded DataRemixed’s guide to embedding YouTube videos. I tried to push this a bit by embedding some videos from MLSsoccer.com, but please let me know if only the YouTube videos work for you. I can probably find a YouTube alternative in most cases.
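For the curious, the heart of the trick is just a YouTube embed URL parameterized per goal; the wiring on the Tableau side follows the guide linked above. A tiny sketch of building those URLs, with placeholder goal names and video IDs:

```python
# Sketch of parameterizing the standard YouTube embed URL per nominee. The goal
# names and video IDs below are placeholders.
video_ids = {
    "Goal of the Year nominee 1": "VIDEO_ID_1",
    "Goal of the Year nominee 2": "VIDEO_ID_2",
}
embed_urls = {goal: f"https://www.youtube.com/embed/{vid}" for goal, vid in video_ids.items()}

for goal, url in embed_urls.items():
    print(goal, "->", url)
```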

The rest of this dashboard came from applications of scatterplots, filters, and background images that I have employed before. At some point I may walk through my process of building this dashboard step by step. Let me know if you would find that interesting.

I am entering this Dashboard, my visual league tables, and some of my previous work in Tableau’s Elite 8 Sports Viz Contest. If one of them becomes a finalist I will definitely write a post next week re-presenting that viz, and possibly enhancing it in some way.

A Better Method for Predicting the 2014 MLS Season

As the kickoff of any league’s season approaches, everyone wants to talk about predictions. Unfortunately, few in the media have any clear method behind their prognostications. A pundit’s expected league table (or power ranking, or whatever they call it) is usually little more than the previous year’s final ranking of teams, with some adjustments made based on how much the author (dis)liked clubs’ offseasons. There are absolutely media types who take a detailed approach, Matt Doyle for example, but they are the exceptions that prove the rule.

Instead of the 2013 MLS final standings, I will be using 2013 expected goal differentials (xGD) as my base for 2014 prognostication. From each club’s xGD, I’ve determined where the league-wide regression line suggests their natural point total lies. My MLS predictions aren’t entirely divergent from the pundits’ approach, though, as I will make some qualitative adjustments based on a variety of factors. Also, instead of simply ranking teams or slapping a single point target on each club, I will present my predictions as a range of likely outcomes.
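For the mechanically curious, “where the league-wide regression line suggests their natural point total lies” is just an ordinary least-squares fit of points on xGD. A minimal sketch with placeholder values rather than the actual 2013 figures:

```python
# Minimal sketch of fitting points to xGD; the data points are placeholders,
# not the actual 2013 MLS figures.
import numpy as np

xgd_2013    = np.array([-15.0, -5.0, 0.0, 8.0, 20.0])    # hypothetical club xGD values
points_2013 = np.array([ 35.0, 42.0, 48.0, 55.0, 62.0])  # hypothetical final point totals

slope, intercept = np.polyfit(xgd_2013, points_2013, 1)  # league-wide regression line

def natural_points(xgd):
    """Point total the regression line suggests for a given expected goal differential."""
    return slope * xgd + intercept

print(f"A club with an xGD of +10 projects to about {natural_points(10):.0f} points")
```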

MLS Oligarchy? Not if the Past is Any Indication.

With the importation of Clint Dempsey, Michael Bradley, Jermain Defoe, and other high-cost players over the last seven months, there has been a lot of talk lately about MLS transitioning from parity to oligarchy. In this view, clubs in LA, New York, Seattle, and Toronto are destined to lord over any who don’t keep pace with the extravagant salaries paid to their stars. Clubs with large salaries are not a new phenomenon in America, though, and there is little evidence of dominance resulting from them.

MLS and NHL: Attendance on the Fringes of “Major Sports”

“Four major sports.”
Say that phrase amidst American soccer fans, and you will soon hear the gnashing of teeth en masse. Some Major League Soccer supporters’ we-get-no-respect hackles rise whenever the sports media implicitly or explicitly questions the first word in the league’s name.
Silly as such fits may be, soccer is gaining on the sports establishment. While MLS isn’t as popular as the NFL, NBA, or MLB, is it truly behind the NHL?

Hockey tradition seems the NHL’s biggest advantage here, with the New York Rangers the sole Original Six member outdrawn by their local MLS club. Meanwhile Seattle, behind 40 years of Sounders tradition, draws more combined fans than every city but LA, without any NHL help. Average attendance isn’t a perfect measure of popularity, as it ignores the number of games, TV ratings, etc. Wider investigation may be warranted, but doing so by sport rather than league would likely favor soccer.

The Next Pirlo? Visually Comparing Central Midfielders

Comparing footballers using only stats can be difficult, and different approaches have to be taken to accommodate the type of player being evaluated. Forwards have traditionally been “analyzed” (even by the math-phobic) based on goals, though this has led to some Andy Carroll-sized problems (better to look at their expected goals per 90, as 11tegen11 showed recently). Some evaluate keepers on save percentage, but that too can be quite flawed.

Even more difficult to evaluate than those that make their money in the penalty area are the men in the center of the park, whose contributions have been completely unquantifiable until recently. After all, goals are really the only player data that was universally recorded even 10 years ago. Maybe you’d find assists, cards, or minutes, but none of those are going to say a great deal about a central midfielder’s achievements.

At the OptaPro Analytics Forum, Marek Kwiatkowski used 2012/13 Opta passing data in an attempt to compare central midfielders in the Premiership, Bundesliga, La Liga, Serie A, and Ligue 1. This is the kind of creative analysis that needs to happen in order to set us on the road toward quantitatively identifying the next Pirlo/Xavi/Gerrard/etc.

Kwiatkowski’s diagram of Mikel Arteta’s passing. The segments are the foundation of his pass length and volume dissimilarity scores.

I found Kwiatkowski’s analysis interesting, and potentially important. First he segmented the 360 degrees on offer for passes into 16 sections, as seen above. From there, each player’s passing work was broken down in three ways.

  1. Distance. Within each angle segment, what were the player’s pass length tendencies?
  2. Volume. How often did he pass in certain directions?
  3. Position. Setting aside where the passes went, where was he passing from, on average?
Kwiatkowski’s GIF illustrating his technique for categorizing player passing location.

For all three categories, he calculated the dissimilarity between all pairings of the 137 central midfielders in his data set, then tallied the scores together, with a lesser weighting for the position score, to get an overall player dissimilarity measure. The smaller the number, the more similar the players.
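To make that combination step concrete, here is a hedged sketch of a weighted dissimilarity sum. It is not Kwiatkowski’s code; the per-player summaries and the position weight are invented, and only the shape of the calculation (three component dissimilarities, a lesser weight on position, smaller meaning more similar) follows the description above.

```python
# Hedged sketch of combining component dissimilarities; the player summaries
# and the position weight are invented for illustration.
from itertools import combinations

players = {
    # hypothetical per-player summaries: (distance profile, volume profile, average position)
    "Arteta":   ([12.0, 18.0], [0.30, 0.70], (52.0, 38.0)),
    "Toure":    ([13.5, 17.0], [0.28, 0.72], (55.0, 41.0)),
    "Fellaini": ([20.0, 25.0], [0.55, 0.45], (60.0, 50.0)),
}

def euclidean(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

POSITION_WEIGHT = 0.5   # "a lesser weighting for the position score"; the exact value is a guess

def dissimilarity(p, q):
    distance_score = euclidean(players[p][0], players[q][0])
    volume_score   = euclidean(players[p][1], players[q][1])
    position_score = euclidean(players[p][2], players[q][2])
    return distance_score + volume_score + POSITION_WEIGHT * position_score  # smaller = more similar

for p, q in combinations(players, 2):
    print(f"{p} vs {q}: {dissimilarity(p, q):.2f}")
```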

I’ve grossly over-simplified the analysis, and for greater detail on methodology, check out Kwiatkowski’s own article on Statsbomb about his study. At the Opta Forum I came away impressed, but felt that the visualization Marek used to display his findings could be improved greatly with some work in Tableau.

Thankfully, Marek agreed and sent me his dissimilarity scores, allowing me to build an interactive Dashboard that could switch between every central midfielder he analyzed.

Please take advantage of all the interactivity at your disposal here. First, the dropdown menu allows you to choose any single player, or all of them at once. The filters are most useful when seeking signal within the noise of all 9,316 pairings. Also, if you select a pairing, or group of pairings, in either graph, it will act as a filter for the other graph.

Looking over some of the most similar pairs can be instructive, but keep in mind that this model is built only on passing. Differences in Mikel Arteta and Yaya Toure’s shooting, tackling, etc. won’t be reflected here. Same for Xavi and Thiago Alcántara or Marouane Fellaini and Isco.

There are certainly other limits to this kind of analysis, but most are similar to those that linger after traditional scouting. Few can make more than educated guesses at what is required of individual players tactically. Also, further study would be required to see if players’ passing tendencies are consistent year-to-year, with those who change clubs being of particular interest.

Despite those caveats, the questions this analysis raises are quite tantalizing. If you were to replace one of these players with his closest match, would his passing patterns feel familiar to his new teammates? Theoretically, could teammates adjust more easily to differences that have nothing to do with distribution?

Smart analysis is seldom about giving an absolute, irrefutable answer, but instead it aims to offer knowledge that can push toward smarter decisions. Kwiatkowski’s analysis can simply help us compare central midfielders in a smarter way, and that has value.

The Improved EPL Table

Edit: Updated through matches played on the 8th of March. Also, tables for the Bundesliga, La Liga, Serie A, and Eredivisie are available in tabs of the visualization. All of these tables will be updated at least weekly, both here and on the Tables page (link always in the site’s header).

League tables usually have pronounced shortcomings. The pertinent information is there, but you have to look closely or run some simple calculations to see the things that clubs and fans should most care about.

Raw order is generally misleading because there are usually a few teams huddled close together, while other adjacent clubs have dramatically different records. For example, right now in the Premiership, Newcastle sits in ninth place on 37 points through 26 matches, while tenth through fourteenth all have either 28 or 27 points. Yet many fans glance at only that order and assume that Swansea is as close to Newcastle as Stoke is to Hull. If you are reading this after the table has changed, here’s an image of my table next to one from ESPNFC as of February 15th, 2014.

Meanwhile the most pressing concern of all teams should be where they can and will end their season. My main goals in this project are to simply and quickly display 1) clubs’ true position relative to each other and 2) what they will need to do over the rest of the season to achieve certain goals.

Points Per Game (PPG) is the simplest way to simultaneously display both of these. Just take points divided by matches played for the current standings, and points needed to reach a point target divided by fixtures remaining for that club. Math doesn’t get much simpler, but below we can see all of these figures for all of the clubs very quickly.
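In code form, the two figures behind each club are just this (Newcastle’s points and matches come from the example above; the 62-point target is purely illustrative):

```python
# The two PPG figures the table displays, per the description above.
def current_ppg(points, played):
    return points / played

def required_ppg(points, played, target, total_matches=38):
    remaining = total_matches - played
    return max(target - points, 0) / remaining

print(f"Current PPG:  {current_ppg(37, 26):.2f}")      # Newcastle: 37 pts / 26 matches = 1.42
print(f"Required PPG: {required_ppg(37, 26, 62):.2f}") # 25 more points over 12 fixtures = 2.08
```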

The point targets are not set in a terribly scientific manner, but they are adjustable. If a race is close, I just round up the leader’s full season point pace. If there’s a significant PPG gap, I set the point target at a point in between the teams on either side of the divide.

So if you have an analysis, or a gut feeling, that tells you that a particular target is too high or low, please click the arrows or type in your preferred new number. The graph and the hover-over data windows for each team will adjust immediately. If you want to use this to illustrate your own analysis, take a screenshot after adjusting targets and post it on your own blog, but please include a link to StatHunting.com

The hover-over window for each club includes not only specific PPG targets, but also other pertinent information like goal difference and home/away matches remaining.

Going forward I will update this at least once per week, and I hope to build similar treatments for other leagues, too. The updates will post automatically to wherever the interactive version of this table is online.