Part of analyzing an NFL team’s passing game is measuring the distribution of passing stats among its receivers: targets, yards, touchdowns, etc. A good way to do this is to look at the percentage distribution to different receivers, i.e. the WR1 received 25 percent of receiving yards, the WR2 got 15 percent, and so on. This is useful when projecting stat lines for receivers, but doesn’t give us an easy way of broadly comparing how concentrated one passing game is versus another.
A commenter at Chase Stuart’s Football Perspective gave him an idea from the finance world for calculating portfolio concentration, which Stuart used to conclude that teams are spreading it around more these days. The formula sums the squared passing yard ratios for each receiver to form a single number, or what Stuart calls the concentration index. The more spread out a passing game, the lower the concentration index, and vice versa.
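The calculation described above can be sketched in a few lines. This is a minimal illustration of the idea (finance knows it as a Herfindahl-style index): sum each receiver's squared share of total receiving yards. The yardage numbers below are made up for illustration, not any real team's stats.

```python
def concentration_index(yards):
    """Sum of each receiver's squared share of total receiving yards."""
    total = sum(yards)
    return sum((y / total) ** 2 for y in yards)

# A concentrated passing game: one dominant receiver.
concentrated = [1400, 500, 300, 200, 100]

# A spread-out passing game: yards distributed evenly.
spread = [500, 500, 500, 500, 500]

print(round(concentration_index(concentrated), 3))  # 0.376
print(round(concentration_index(spread), 3))        # 0.2 (= 1/5)
```

Note that a perfectly even five-receiver split bottoms out at 1/5, and the index rises toward 1 as one receiver soaks up more of the yardage.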
I decided to dig into concentration index a bit more to see if we can learn anything about the benefits or costs of having a diversified passing game. (more…)
Depending on whom you listen to, you should expect the Rams to either increase or lessen the extent to which they use Todd Gurley in the passing game. At Rams minicamp, NFL.com’s Gregg Rosenthal witnessed Gurley seeing a lot of targets and often appearing to be the primary read, while others see the addition of receiving back Lance Dunbar and new head coach Sean McVay’s heavy passing-game usage of Chris Thompson in Washington logically leading to Gurley seeing fewer passes.
Gurley’s role in the passing game this season will also depend heavily on his own ability, in addition to the Rams’ personnel and passing scheme. Gurley saw decent, but not spectacular, passing usage in college, and has carried that forward to the NFL.
A look at Gurley, Dunbar and league-wide efficiency
Here’s a closer look at how Gurley compares to the rest of the league – including Dunbar – in terms of passing volume and efficiency. (more…)
We’re into the summer doldrums of the NFL calendar, but we got a small dose of excitement over the last week. Two top wide receivers are in limbo: Jeremy Maclin officially hit the free agent market last week, and has been making the rounds to potential suitors; and Eric Decker – initially expected to be released by the Jets – is now reportedly the subject of trade talks.
It’s fair to assume that the two receivers will cost roughly the same in terms of a new contract, as both are at, or approaching, 30 years old and have been productive – when healthy – throughout their careers. Decker isn’t a free agent, so his cost is currently higher in draft capital, although I’d put a low likelihood on the Jets gaining more than a late-round pick for his services.
The receivers share fairly similar box-score stats, both averaging around 70 yards per game over the last four years. But Decker has been a more dominant touchdown scorer. While box score stats are good at measuring the effect a receiver has when he is targeted and catches the ball, they don’t fully capture his influence on the entire offense. (more…)
I made an appearance on Ed Feng’s Football Analytics Show, which is associated with his fantastic content over at The Power Rank.
We discussed my research, how to find undervalued running back and wide receiver prospects, and my bold prediction for the 2017 NFL draft (that ended up coming true): A team would trade into the top-11 picks and select Patrick Mahomes before Deshaun Watson.
To listen on iTunes, click here.
Or visit Ed’s site for embedded audio.
I tend to ignore the specifics of most NFL mock drafts, but I was happy to see Rotoworld’s Josh Norris recognize the importance of sample size while recently predicting that the analytical wizards with the Cleveland Browns will prefer Deshaun Watson to Mitchell Trubisky come draft day.
(We should mention that everything written below regarding Watson’s larger sample also applies to Patrick Mahomes, who had more pass attempts than Watson at a similar yards per attempt.)
To those who are familiar with statistics generally, the concept that the Browns would want a larger sample for their potential franchise quarterback isn’t tremendously difficult to grasp. But it was still a pleasant surprise to see someone in the larger draft community weave this thinking into his analysis, especially when some mock drafts still forecast the Browns to make terrible strategic decisions from an analytical perspective, including taking a running back in the middle of the first round.
We know that bigger is better when it comes to sample size, and various models, including Football Outsiders’ QBASE – developed by Andrew Healy, now the Browns’ senior strategist for player personnel – have shown that quarterbacks with longer college careers are more likely to be successful in the NFL. But is there a way we can truly peel apart the analysis and see the nuts and bolts of why this is so? (more…)
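One way to see why a larger sample matters: the standard error of an estimate like yards per attempt shrinks with the square root of the number of attempts. This is a hedged illustration only; the attempt counts and per-attempt standard deviation below are round, made-up figures, not any quarterback's actual numbers.

```python
import math

def ypa_standard_error(stdev_per_attempt, attempts):
    """Standard error of a yards-per-attempt estimate over n attempts."""
    return stdev_per_attempt / math.sqrt(attempts)

# Assume the same per-attempt standard deviation for both passers.
stdev = 9.0  # yards per attempt, a rough illustrative figure

small_sample = ypa_standard_error(stdev, 450)   # roughly one season of attempts
large_sample = ypa_standard_error(stdev, 1200)  # a longer college career

print(round(small_sample, 3))  # 0.424
print(round(large_sample, 3))  # 0.26
```

The passer with nearly triple the attempts yields a meaningfully tighter estimate of his true ability, which is the statistical intuition behind preferring the larger sample.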
We have roughly a month until the 2017 NFL draft, when we will learn where our favorite (or not so favorite) prospects will land this coming season. While draft position and landing spot are huge factors for forecasting the success of any running back prospect, I’ve found that we can accurately predict whether a running back will be successful largely based on his production profile and athletic measurables.
We know that for wide receivers, collegiate production isn’t everything; it’s the only thing. For running backs, the situation is wholly different. Production matters, but size-adjusted speed is king for determining which running backs will be successful in the NFL.
You can define success many ways, but I’m choosing to use a top-12 fantasy point season (PPR) for running backs. The model’s dependent variable for early NFL success is whether or not a player had such a season within his first three years in the NFL.
We used age, production, and combine measurables to train and test the updated 2017 running back model. The model used 350 running back prospects that entered the NFL from 2000-2014, splitting the data roughly 2-to-1 into training and testing sets.
After plugging dozens of different production and combine statistics into the model and slowly removing, one by one, the least statistically significant, we were left with four (two combine, two production) that provide the most explanatory and predictive power (listed in order of statistical significance): (more…)
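The workflow above – a roughly 2-to-1 train/test split followed by backward elimination of the weakest features – can be sketched as follows. Everything here is a stand-in: the feature names are hypothetical, the prospect data are random, and simple correlation with the outcome substitutes for a real significance test.

```python
import random

random.seed(0)

# Hypothetical feature names, not the model's actual inputs.
FEATURES = ["forty_time", "weight", "rec_share", "rush_yds_share",
            "bench", "vertical", "age", "td_share"]

# 350 fake prospects: a feature dict plus a 0/1 "top-12 PPR season" label.
prospects = [
    ({f: random.random() for f in FEATURES}, random.randint(0, 1))
    for _ in range(350)
]

# Roughly 2-to-1 train/test split.
random.shuffle(prospects)
cut = len(prospects) * 2 // 3
train, test = prospects[:cut], prospects[cut:]

def correlation(feature, data):
    """Pearson correlation of one feature with the outcome label."""
    xs = [row[0][feature] for row in data]
    ys = [row[1] for row in data]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy) if sx and sy else 0.0

# Backward elimination: repeatedly drop the feature with the weakest
# relationship to the outcome on the training set, until four remain.
kept = list(FEATURES)
while len(kept) > 4:
    weakest = min(kept, key=lambda f: abs(correlation(f, train)))
    kept.remove(weakest)

print(kept)  # the four surviving features
```

In a real version you would fit a logistic regression at each step and drop by p-value rather than raw correlation, but the shape of the loop is the same.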