
It’s quite simple actually.

Apropos of the myriad articles and discussions about the run scoring and HR surge starting in late 2015 and continuing through 2017 to date, I want to go over what can cause league run scoring to increase or decrease from one year to the next:

  1. Changes in equipment, such as the ball or bat.
  2. Changes to the strike zone, either the overall size or the shape.
  3. Rule changes.
  4. Changes in batter strength, conditioning, etc.
  5. Changes in batter or pitcher approaches.
  6. Random variation.
  7. Weather and park changes.
  8. Natural variation in player talent.

I’m going to focus on the last one, variation in player talent from year to year. How does the league “replenish” its talent from one year to the next? Poorer players get less playing time, including those who get no playing time at all (because they retired, got injured, or moved to another league). Better players get more playing time, and new players enter the league. Much of that is because of the aging curve: younger players generally get better and thus amass more playing time, while older players get worse and play less, eventually retiring or being released. All of these moves can leave each league with a little more or a little less overall talent, and thus run scoring, than in the previous year. How can we measure that change in talent/scoring?

One good method is to look at how a player’s league-normalized stats change from year X to year X+1. First we have to establish a baseline. To do that, we track the average change in some league-normalized stat like linear weights, RC+, or wOBA+ over many years. It is best to confine it to players in a narrow age range, like 25 to 29, so that we minimize the problem of the average league age being different from one year to the next, and thus the amount of decline with age also being different.
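If you want to see the bookkeeping, here is a rough sketch of the baseline calculation in Python. The file name and column names are placeholders for whatever player-season data you have, not my actual data or code, and a real version would weight each player pair by playing time (e.g., the lesser of the two seasons’ PA) rather than taking a straight average.

```python
import pandas as pd

# Hypothetical table: one row per player-season, with league-normalized linear
# weights per team per game ("lwts_pg"), plus player_id, age, league, and year.
df = pd.read_csv("batting_lwts.csv")  # placeholder file and columns, for illustration only

def baseline_change(df, league, age_min=25, age_max=29):
    """Average year-to-year change in league-normalized lwts for one league,
    restricted to a narrow age window (unweighted, for simplicity)."""
    d = df[(df.league == league) & (df.age.between(age_min, age_max))]
    # Pair each player's year X with his year X+1.
    nxt = d.copy()
    nxt["year"] -= 1  # shift year X+1 rows so they line up with year X
    pairs = d.merge(nxt, on=["player_id", "year"], suffixes=("_x", "_x1"))
    deltas = pairs["lwts_pg_x1"] - pairs["lwts_pg_x"]
    return deltas.mean()  # e.g. roughly -.12 (NL) or -.10 (AL), per the text

print(baseline_change(df, "NL"))
```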

We’ll start with batting. The stat I’m using is linear weights, which is generally zeroed out at the league level. In other words, the average player in each league, NL and AL separately, has linear weights of exactly zero. If we look at the average change from 2000 to 2017 for all batters 25 to 29 years old, we get -.12 runs per team per game in the NL and -.10 in the AL. That means that these players decline with age, that the quality of the league’s batting gets better every year, or some combination of the two. We’ll assume that most of that -.12 runs is due to aging (and that peak age is close to 25 or 26, which it probably is in the modern era), but it doesn’t matter for our purposes.

So, for example, if from year X to X+1 in the NL, all batters age 25-29 lost .2 runs per game per team, what would that tell us? It would tell us that league batting in year X+1 was better than in year X by .1 runs per team per game. Why is that? If these players should lose only .1 runs but they actually lost .2 runs, and thus look worse than they should relative to the league as a whole, that means the league around them got better.
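In other words, the inference is just a subtraction. With the made-up numbers from that example:

```python
# Hypothetical numbers from the example above, in runs per team per game.
baseline_change = -0.10   # how much a 25-29 year old group "should" decline
observed_change = -0.20   # how much the group actually declined, year X to X+1

# If the group fell further than expected relative to the league, the league
# around them must have gotten better by the difference.
league_batting_change = baseline_change - observed_change
print(round(league_batting_change, 2))  # 0.1 -> league batting improved by .1 runs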

Keep in mind that the quality of the pitching has no effect on this method. Whether the overall pitching talent changes from year 1 to year 2 has no bearing on these calculations. Nor do changes in parks, differences in weather, or any other variable that might change from year to year and affect run scoring and raw offensive stats. We’re using linear weights, which is always relative to other batters in the league. The sum of everyone’s offensive linear weights in any given year and league is always zero.

Using this method, here is the change in batting talent from year to year, in the NL and AL, from 2000 to 2017. Plus means the league got better in batting talent; minus means it got worse. In other words, a plus value means that run scoring should increase, everything else being the same. Notice the decline in batting talent in both leagues from 2016 to 2017, even though we see increased run scoring. Either pitching got much worse or something else is going on. We’ll see about the pitching.

Table I

Change in batting linear weights, in runs per game per team

Years NL AL
00-01 .09 -.07
01-02 -.12 -.23
02-03 -.15 -.11
03-04 .09 -.11
04-05 -.10 -.14
05-06 .15 .05
06-07 .09 .08
07-08 -.05 .08
08-09 -.13 .08
09-10 .17 -.12
10-11 -.18 .04
11-12 .12 .00
12-13 -.03 -.05
13-14 .01 .07
14-15 .06 .09
15-16 .01 .05
16-17 -.03 -.12

 

Here is the same chart for league pitching. The stat I am using is ERC, or component ERA. Component ERA takes a pitcher’s raw rate stats (singles, doubles, triples, home runs, walks, and outs per PA), park and defense adjusted, and converts them into theoretical runs allowed per 9 innings using a BaseRuns formula. Like linear weights, it is scaled to league average. A plus number means that league pitching got worse, and hence run scoring should go up.
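For anyone who wants to see the shape of that calculation, here is a bare-bones sketch. The BaseRuns coefficients below are one commonly published simple version, not necessarily the exact formula behind my ERC numbers, and the park/defense adjustments and the scaling to league average are left out.

```python
def base_runs(s, d, t, hr, bb, outs):
    """Rough BaseRuns estimate: runs = A*B/(B+C) + D.
    Coefficients are one simple published version, used here only for illustration."""
    h = s + d + t + hr
    tb = s + 2 * d + 3 * t + 4 * hr
    A = h + bb - hr                               # baserunners other than HR
    B = 1.4 * tb - 0.6 * h - 3 * hr + 0.1 * bb    # advancement factor
    C = outs                                      # outs
    D = hr                                        # home runs score themselves
    return A * B / (B + C) + D

def component_runs_per_9(s, d, t, hr, bb, outs):
    """Theoretical runs allowed per 9 innings from a pitcher's component totals."""
    runs = base_runs(s, d, t, hr, bb, outs)
    innings = outs / 3
    return 9 * runs / innings

# Hypothetical season line: 120 singles, 40 doubles, 3 triples, 20 HR, 50 BB, 540 outs
print(round(component_runs_per_9(120, 40, 3, 20, 50, 540), 2))  # ~4.3 runs per 9
```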

Table II

Change in pitching (ERC), in runs per game per team

Years NL AL
00-01 .02 .21
01-02 .03 .00
02-03 -.04 -.23
03-04 .07 .11
04-05 .00 .07
05-06 -.14 -.12
06-07 .10 .06
07-08 -.15 -.10
08-09 -.13 -.17
09-10 .01 .04
10-11 .03 .16
11-12 .03 -.06
12-13 -.02 .26
13-14 -.02 -.04
14-15 .06 -.02
15-16 .03 .04
16-17 .04 -.01

 

Notice that pitching in the NL got a little worse from 2016 to 2017. Overall, when you combine pitching and batting (-.04 and -.03), the NL has worse talent in 2017 than in 2016, by .07 runs per team per game. Yet NL teams should score .01 runs per game more than in 2016, because the decline in pitching slightly outweighs the decline in batting – again, all other things being equal (they usually are not).

In the AL, while we’ve seen a decrease in batting of .12 runs per team per game (which is a lot), we’ve also seen a slight increase in pitching talent, .01 runs per game per team. We would expect the AL to score .13 runs per team per game less in 2017 than in 2016, assuming nothing else has changed. The overall talent in the AL, pitching plus batting, decreased by .11 runs.

The gap in talent between the NL and AL, at least with respect to pitching and batting only (not including base running and defense, which can also vary from year to year) has presumably moved in favor of the NL by .04 runs a game per team, despite the AL’s .600 record in inter-league play so far this year compared to .550 last year (one standard deviation of the difference between this year’s and last year’s inter-league w/l record is over .05, so the difference is not even close to being statistically significant – less than one SD).
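That parenthetical is just the standard formula for the SD of the difference between two independent proportions. The game counts below are placeholders, not the actual inter-league totals, but they show why a .05 gap in W/L record is nothing:

```python
from math import sqrt

def sd_of_diff(p1, n1, p2, n2):
    """SD of the difference between two independent sample winning percentages."""
    return sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)

# Placeholder game counts -- not the actual inter-league totals.
this_year = (0.600, 100)   # AL inter-league W% so far this year, games assumed
last_year = (0.550, 300)   # AL inter-league W% last year, games assumed

sd = sd_of_diff(*this_year, *last_year)
print(round(sd, 3))  # ~.057 with these counts, so a .05 gap is less than one SD
```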

Let’s complete the analysis by doing the same thing for UZR (defense) and UBR (base running). A plus defensive change means that the defense got worse (thus more runs scored). For base running, plus means better (more runs) and minus means worse.

Table III

Change in defense (UZR), in runs per game per team

Years NL AL
00-01 .01 -.07
01-02 -.01 .05
02-03 .18 -.07
03-04 .10 .03
04-05 .12 .00
05-06 -.08 -.07
06-07 .02 .03
07-08 .04 .01
08-09 -.02 -.02
09-10 -.01 -.02
10-11 .15 -.04
11-12 -.10 -.07
12-13 -.02 .03
13-14 -.10 .03
14-15 -.02 -.02
15-16 -.07 -.05
16-17 -.06 .05

 

From last year to this year, defense in the NL got better by .06 runs per team per game, signifying a decrease in run scoring. In the AL, the defense appears to have gotten worse, by .05 runs a game. By the way, since 2012, you’ll notice that teams have gotten much better on defense in general, likely due to an increased awareness of the value of defense, and the move away from the slow, defensively-challenged power hitter.

Let’s finish by looking at base running and then we can add everything up.

Table IV

Change in base running (UBR), in runs per game per team

Years NL AL
00-01 -.02 -.01
01-02 -.02 -.01
02-03 -.01 .00
03-04 .00 -.04
04-05 .02 .02
05-06 .00 -.01
06-07 -.01 -.01
07-08 .00 .00
08-09 .02 .02
09-10 -.02 -.02
10-11 .04 -.01
11-12 .00 -.02
12-13 -.01 -.01
13-14 .01 -.01
14-15 .01 .05
15-16 .01 -.03
16-17 .01 .01

 

Remember that the batting and pitching talent in the AL presumably decreased by .11 runs per team per game, and the AL was expected to score .13 fewer runs per game per team in 2017 than in 2016. Adding in defense and base running, those numbers become a decrease in AL talent of .15 runs and a decrease in run scoring of only .07 runs per team per game.

In the NL, when we add defense and base running to batting and pitching, we get no overall change in talent from 2016 to 2017, and a decrease in run scoring of .04 runs per team per game.
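Since the sign conventions differ from table to table, here is the arithmetic for those 2016-17 numbers spelled out in a few lines of Python; Table V below applies the same additions to every season.

```python
def combine(bat, pit, dfn, br):
    """Combine the four component changes (runs per game per team).
    Per Tables I-IV, a plus in any component means more runs scored; for talent,
    plus batting/base running is better, plus pitching/defense is worse."""
    talent = bat - pit - dfn + br
    runs = bat + pit + dfn + br
    return round(talent, 2), round(runs, 2)

# 2016-17 component changes read off Tables I-IV
print(combine(bat=-0.03, pit=0.04, dfn=-0.06, br=0.01))  # NL: talent ~0.00, runs -0.04
print(combine(bat=-0.12, pit=-0.01, dfn=0.05, br=0.01))  # AL: talent -0.15, runs -0.07
```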

We also see a slight trend towards better base running since 2011, which you would expect to go hand in hand with the trend toward better defense.

Here is everything combined into one table.

Table V

Change in talent and run scoring, in runs per game per team. Plus means a gain in talent and more runs scored.

Years NL Talent AL Talent NL Runs AL Runs
00-01 .04 -.22 .09 .06
01-02 -.16 -.29 -.12 -.19
02-03 -.30 .19 -.02 -.41
03-04 -.08 -.29 .26 -.01
04-05 -.20 -.19 .04 -.05
05-06 .37 .23 -.07 -.15
06-07 -.02 -.02 .23 .16
07-08 .06 .17 -.16 -.01
08-09 .04 .29 -.26 -.09
09-10 .15 -.16 .05 -.12
10-11 -.31 -.09 .04 .15
11-12 .19 .11 .05 -.15
12-13 .00 -.35 -.08 .23
13-14 .14 .07 -.10 .05
14-15 .03 .18 .11 .10
15-16 .06 .03 -.02 .03
16-17 .00 -.15 -.04 -.07

If you haven’t read it, here’s the link.

For MY ball tests, the difference I found in COR was 2.6 standard deviations, as indicated in the article. The difference in seam height is around 1.5 SD. The difference in circumference is around 1.8 SD.

For those of you a little rusty on your statistics, the SD of the difference between two sample means is the square root of the sum of their respective variances.
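To be precise, “their respective variances” here means the variances of the two sample means, i.e. each sample’s variance divided by its sample size. In code, with made-up sample sizes and SDs purely to show the mechanics (these are not my actual test numbers):

```python
from math import sqrt

def sd_of_mean_difference(sd1, n1, sd2, n2):
    """Standard error of the difference between two independent sample means:
    the square root of the sum of the two means' variances (sd_i**2 / n_i each)."""
    return sqrt(sd1**2 / n1 + sd2**2 / n2)

# Hypothetical COR numbers, for illustration only: two batches of balls with
# sample SDs of .005 and 12 balls each.
se_diff = sd_of_mean_difference(0.005, 12, 0.005, 12)
print(round(se_diff, 4))  # ~.002; a COR gap of 2.6 times this would match the 2.6 SD above
```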

The use of statistical significance is one of the most misunderstood and abused concepts in science. You can read about this on the internet if you want to know why. It has a bit to do with frequentist versus Bayesian statistics/inference.

For example, when you have a non-null hypothesis going into an experiment, such as, “The data suggest an altered baseball,” then ANY positive result supports that hypothesis and increases the probability of it being true, regardless of the “statistical significance” of those results.

Of course, the more significant the result, the more it increases that probability relative to our prior. However, the classic case of using 2 or 2.5 SD to define “statistical significance” really only applies when you start out with the null hypothesis. In this case, for example, that would be if you had no reason to suspect a juiced ball and you merely tested balls to see if perhaps there were differences. In reality, you almost always have a prior probability, which is why the traditional concept of accepting or rejecting the null hypothesis based on the statistical significance of your experimental results is an obsolete one.
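To illustrate the Bayesian point in one line of arithmetic: the numbers below are invented purely to show the mechanics of the update, not estimates of the actual probabilities involved.

```python
def update_odds(prior_odds, likelihood_ratio):
    """Bayes' rule in odds form: posterior odds = prior odds * likelihood ratio."""
    return prior_odds * likelihood_ratio

# Invented numbers, purely to show the mechanics:
prior_odds = 2.0        # 2:1 in favor of an altered ball before the test
likelihood_ratio = 4.0  # test result 4x as likely if the ball really changed
posterior_odds = update_odds(prior_odds, likelihood_ratio)
posterior_prob = posterior_odds / (1 + posterior_odds)
print(round(posterior_prob, 2))  # 8:1 odds, ~0.89 -- any likelihood ratio > 1 raises the probability
```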

In any case, from the results of MLB’s own tests, in which they tested something like 180 balls a year, the seam height reduction we found was something like 6 or 7 SD and the COR increase was something like 3 or 4 SD. We can also add to the mix Ben’s original test, in which he found an increase in COR of .003, or around 60% of what I found.

So yes, the combined results of all three tests are almost unequivocal evidence that the ball was altered. There’s not much else you can do other than to test balls. Of course the ball testing would mean almost nothing if we didn’t have the batted ball data to back it up. We do.

I don’t think this “ball change” was intentional by MLB, although it could be.

In my extensive research for this project, I have uncovered two things:

One, there is quite a large actual year-to-year difference in the construction of the ball, which can and does have a significant impact on HR and offensive rates in general. The concept of a “juiced” (or “de-juiced”) ball doesn’t really mean anything unless it is compared to some other ball – for example, in our case, 2014 versus 2016/2017.

Two, we now know, because of Statcast and lots of great work and insight by Alan Nathan and others, that very small changes in things like COR, seam height, and size can have a dramatic impact on offense. My (wild) guess is that we probably have something like a 2- or 3-foot variation (one SD) from year to year in batted ball distance, for a typical HR trajectory, based on the (random) fluctuating composition and construction of the ball. And from 2014 to 2016 (and so far this year), we just happened to see a 2 or 3 standard deviation variation.
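To see why a few feet of carry matters so much, here is a toy calculation. Every number in it is invented just to illustrate the leverage; it is not a model of actual batted-ball data.

```python
from statistics import NormalDist

# Purely illustrative: suppose well-hit fly balls travel a normally distributed
# distance (mean 370 ft, SD 30 ft) and a ball is a homer when it carries 390 ft.
flyball = NormalDist(mu=370, sigma=30)
wall = 390

hr_rate_before = 1 - flyball.cdf(wall)
hr_rate_after = 1 - NormalDist(mu=370 + 5, sigma=30).cdf(wall)  # 5 extra feet of carry

print(round(hr_rate_before, 2), round(hr_rate_after, 2))
# roughly .25 vs .31 -- about a 20% jump in HR per fly ball from just 5 feet of carry
```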

We’ve seen it before, most notably in 1987, and we’ll probably see it again. I have also altered my thinking about the “steroid era.” Now that I know that balls can fluctuate from year to year, sometimes greatly, it is entirely possible that balls were constructed differently starting in 1993 or so – perhaps in combination with burgeoning and rampant PED use.

Finally, it is true that there are many things that can influence run scoring and HR rates, some more than others. Weather and parks are very minor. Even a big change in one or two parks, or a very hot or cold year, will have only a small effect overall. And of course we can easily test or account for these things.

Changes in talent can, surprisingly, have a large effect on overall offense. For example, this year the AL lost a lot of offensive talent, which is one reason why the NL and the AL have almost equal scoring despite the AL having the DH.

The only other thing that can fairly drastically change offense is the strike zone. Obviously it depends on the magnitude of the change. In the pitch f/x era we can measure that, as Joe Roegele and others do every year. It had not changed much in the last few years until this year. It is smaller now, which is causing an uptick in offense from last year. I also believe, as others have said, that part of the uptick since last year is due to batters realizing that they are playing with a livelier ball and thus hitting more air balls. They may be hitting more air balls even without thinking that the ball is juiced – they may just be jumping on the “fly-ball bandwagon.” Either way, hitting more fly balls compounds the effect of a juiced ball, because with a livelier ball it is correct to hit more fly balls.
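For what it’s worth, the zone-size measurement is conceptually simple. Here is a rough sketch of one grid-based way to do it, in the spirit of that published zone-size work; the file and column names are assumptions about whatever pitch-level data you have, and a real version would require a minimum number of pitches per grid cell and would split by batter handedness.

```python
import pandas as pd

# Pitch-level data with plate_x/plate_z locations (feet) and a 'description'
# field -- these column names follow the common pitch-tracking convention and
# are assumptions about your data source.
pitches = pd.read_csv("pitches.csv")
taken = pitches[pitches.description.isin(["called_strike", "ball"])]

def called_zone_area(taken, cell_inches=2):
    """Area (square inches) of all grid cells where a taken pitch is more
    likely to be called a strike than a ball."""
    df = taken.copy()
    cell_ft = cell_inches / 12
    df["x_bin"] = (df.plate_x / cell_ft).round()
    df["z_bin"] = (df.plate_z / cell_ft).round()
    strike_rate = (df.assign(is_strike=df.description == "called_strike")
                     .groupby(["x_bin", "z_bin"]).is_strike.mean())
    return (strike_rate > 0.5).sum() * cell_inches ** 2

print(called_zone_area(taken))  # compare the same number across seasons
```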

Then there is the bat, which I know nothing about. I have not heard anything about the bats being different or what you can do to a bat to increase or decrease offense, within allowable MLB limits.

Do I think that the “juiced ball” (in combination with players taking advantage of it) is the only reason for the HR/scoring surge? I think it’s the primary driver, by far.