Archive for the ‘Game Theory’ Category

There’s an article up on Fangraphs by Eno Sarris that talks about whether the pitch to Justin Turner in the bottom of the 9th inning in Game 2 of the 2017 NLCS was the “wrong” pitch to throw in that count (1-0) and situation (tie game, runners on first and second, 2 outs) given Turner’s proclivities in that count. I won’t go into the details of the article – you can read it yourself – but I do want to talk about what it means or doesn’t mean to criticize a pitcher’s pitch selection on one particular pitch, and how pitch selection even works in general.

Let’s start with this – the basic tenet of pitching and pitch selection: Every single situation calls for a pitch frequency matrix. One pitch is chosen randomly from that matrix according to the “correct” frequencies. The “correct” frequencies are those at which every pitch in the matrix yields exactly the same expected “result” (where the result is measured by the combined win expectancy impact of all the possible outcomes of that pitch).

Now, obviously, most pitchers “think” they’re choosing one specific pitch for some specific reason, but in reality since the batter doesn’t know the pitcher’s reasoning, it is essentially a random selection as far as he is concerned. For example, a pitcher throws an inside fastball to go 0-1 on the batter. He might think to himself, “OK, I just threw the inside fastball so I’ll throw a low and away off-speed to give him a ‘different look.’ But wait, he might be expecting that. I’ll double up with the fastball! Nah, he’s a pretty good fastball hitter. I’ll throw the off-speed! But I really don’t want to hang one on an 0-1 count. I’m not feeling that confident in my curve ball yet. OK, I’ll throw the fastball, but I’ll throw it low and away. He’ll probably think it’s an off-speed and lay off of it and I’ll get a called strike, or he’ll be late if he swings.”

As you can imagine, there are an infinite number of permutations of ‘reasoning’ that a pitcher can use to make his selection. The backdrop to his thinking is that he knows what tends to be effective at 0-1 counts in that situation (score, inning, runners, outs, etc.) given his repertoire, and he knows the batter’s strengths and weaknesses. The result is a roughly game theory optimal (GTO) approach which cannot be exploited by the batter and is maximally effective against a batter who is thinking roughly GTO too.

The optimal pitch selection frequency matrix is dependent on the pitcher, the batter, the count and the game situation. In that situation with Lackey on the mound and Turner at the plate, it might be something like 50% 4-seam, 20% sinker, 20% slider, and 10% cutter. The numbers are irrelevant. Then a random pitch is selected according to those frequencies, where, for example, the 4-seamer is chosen twice as often as the sinker and slider, the sinker and slider twice as often as the cutter, etc.
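The selection step described above is mechanically trivial; here is a minimal sketch in Python, using the illustrative frequencies from the text (the function and variable names are mine):

```python
import random

# Illustrative frequencies from the text; the exact numbers are irrelevant
pitch_matrix = {"4-seam": 0.50, "sinker": 0.20, "slider": 0.20, "cutter": 0.10}

def select_pitch(matrix):
    """Draw one pitch at random according to the matrix's frequencies."""
    pitches = list(matrix)
    weights = list(matrix.values())
    return random.choices(pitches, weights=weights, k=1)[0]

# Over many selections the observed mix converges on the intended one,
# even though any single pitch is unpredictable
counts = {p: 0 for p in pitch_matrix}
for _ in range(100_000):
    counts[select_pitch(pitch_matrix)] += 1
```

The batter facing this process sees, for all practical purposes, a random draw, which is the whole point of the mixed strategy.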

Obviously doing that even close to accurately is impossible, but that’s essentially what happens and what is supposed to happen. Miraculously, pitchers and catchers do a pretty good job (really you just have to have a pretty good idea as to what pitches to throw, adjusted a little for the batter). At least I presume they do. It is likely that some pitchers and batters are better than others at employing these GTO strategies as well as exploiting opponents who don’t.

The more a batter likes (or dislikes) a certain pitch (in that count or overall), the less (or more) that pitch will be thrown. In order to understand why, you must understand that the batter’s expected result against a pitch improves as the frequency at which it is thrown in a particular situation increases – the more often it comes, the less of a surprise it is. For example, if Turner is particularly good against a sinker in that situation or in general, it might be thrown 10% rather than 20% of the time. The same is true for locations of course, which makes everything quite complex.

Remember that you cannot tell what types and locations of pitches a batter likes or dislikes in a certain count and game situation from his results! This is a very important concept to understand. The results of every pitch type and location in each count, game situation, and versus each pitcher (you would have to do a “delta method” to figure this) are and should be exactly the same! Any differences you see are noise – random differences (or the result of wholesale exploitative play or externalities as I explain below). We can easily prove this with an example.

Imagine that in all 1-0 counts, early in a game with no runners on base and 0 outs (we’re just choosing a ‘particular situation’ – which situation doesn’t matter), we see that Turner gets a FB 80% of the time and a slider 20% of the time (again, the actual numbers are irrelevant). And we see that Turner’s results (we have to add up the run or win value of all the results – strike, ball, batted ball out, single, double, etc.) are much better against those 80% FB than the 20% SL. Can we conclude that Turner is better against the FB in that situation?

No! Why is that? Because if we did, we would HAVE TO also conclude that the pitchers were throwing him too many FB, right? They would then reduce the frequency of the fastball. Why throw a certain pitch 80% of the time (or at all, for that matter) when you know that another pitch is better?

You would obviously throw it less often than 80% of the time. How much less? Well, say you throw it 79% and the slider 21%. You must be better off with that ratio (rather than 80/20) since the slider is the better pitch, as we just said for this thought exercise. Now what if the FB still yields better results for Turner (and it’s not just noise – he’s still better versus the FB when he knows it’s coming 79% of the time)? Well, again obviously, you should throw the FB even less often and the slider more often.

Where does this end? Every time we decrease the frequency of the FB, the batter gets worse at it since it’s more of a surprise. Remember the relationship between the frequency of a pitch and its effectiveness. At the same time, he gets better and better at the slider since we throw it more and more frequently. It ends at the point in which the results of both pitches are exactly equal. It HAS to. If it “ends” anywhere else, the pitcher will continue to make adjustments until an equilibrium point is reached.

This is called a Nash equilibrium in game theory parlance, at which point the batter can look for either pitch (or any pitch if the GTO mixed strategy includes more than two pitches) and it won’t make any difference in terms of the results. (If the batter doesn’t employ his own GTO strategy, then the pitcher can exploit him by throwing one particular pitch – in which case he then becomes exploitable, which is why it behooves both players to always employ a GTO strategy or risk being exploited.)

As neutral observers, unless we see evidence otherwise, we must assume that all actors (batters and pitchers) are indeed using a roughly GTO strategy and that we are always in equilibrium. Whether they are or they aren’t, to whatever degree and in whichever situations, it certainly is instructive for us and for them to understand these concepts.
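For a toy two-pitch game, the equilibrium the paragraph above converges to can be solved directly. The run values below are invented purely for illustration; the point is that setting the two “sit” expectations equal pins down the fastball frequency:

```python
# Hypothetical batter run values (all numbers invented for illustration).
# Each dict gives the batter's expected result vs. each pitch, given his guess.
sit_fb = {"FB": 0.20, "SL": -0.10}   # batter sitting fastball
sit_sl = {"FB": -0.05, "SL": 0.10}   # batter sitting slider

def equilibrium_fb_rate(sit_fb, sit_sl):
    """Fastball frequency p that makes the batter indifferent between guesses:
    p*a + (1-p)*b == p*c + (1-p)*d  =>  p = (d - b) / (a - b - c + d)."""
    a, b = sit_fb["FB"], sit_fb["SL"]
    c, d = sit_sl["FB"], sit_sl["SL"]
    return (d - b) / (a - b - c + d)

p = equilibrium_fb_rate(sit_fb, sit_sl)              # ~0.444 fastballs
ev_sit_fb = p * sit_fb["FB"] + (1 - p) * sit_fb["SL"]
ev_sit_sl = p * sit_sl["FB"] + (1 - p) * sit_sl["SL"]
# ev_sit_fb == ev_sit_sl: at equilibrium, sitting on either pitch gains nothing
```

At any other fastball rate the two expectations differ, and the batter profits by sitting on the more likely pitch, which is exactly the adjustment pressure that drives the frequencies to this point.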

Assuming an equilibrium, this is what you MUST understand: Any differences you see in either a batter’s results or a pitcher’s results across different pitches MUST be noise – an artifact of random chance. Keep in mind that this is only true for each subset of identical circumstances – the same opponent, count, and game situation (even umpire, weather, park, etc.). If you look at the results across all situations you will see legitimate differences across pitch types. That’s because they are thrown with different frequencies in different situations. For example, you will likely see better results for a pitcher with his secondary pitches overall simply because he throws them more frequently in pitcher’s counts (although this is somewhat offset by the fact that he throws them more often against better batters).

Is it possible that there are some externalities that throw this Nash equilibrium out of whack? Sure. Perhaps a pitcher must throw more fastballs than off-speed pitches in order to prevent injury. That might cause his numbers for the FB to be slightly worse than for other pitches. Or the slider may be particularly risky, injury-wise, such that pitchers throw it less often than GTO, which yields a better result (from the pitcher’s standpoint) than the other pitches.

Any other deviations you see among pitch types and locations, by definition, must be random noise, or, perhaps exploitative strategies by either batters or pitchers (one is making a mistake and the other is capitalizing on it). It would be difficult to distinguish the two without some statistical analysis of large samples of pitches (and then we would still only have limited certainty with respect to our conclusions).

So, given all that is true, which it is (more or less), how can we criticize a particular pitch that a pitcher throws in one particular situation? We can’t. We can’t say that one pitch is “wrong” and one pitch is “right” in ANY particular situation. That’s impossible to do. We cannot evaluate the “correctness” of a single pitch. Maybe the pitch that we observe is the one that is only supposed to be thrown 5 or 10% of the time, and the pitcher knew that (and the batter was presumably surprised by it whether he hit it well or not)! The only way to evaluate a pitcher’s pitch selection strategy is by knowing the frequency at which he throws his various pitches against the various batters in the various counts and game situations. And that requires an enormous sample size of course.

There is an exception.

The one time we can say that a particular pitch is “wrong” is when that pitch is not part of the correct frequency matrix at all – i.e., the GTO solution says that it should never be thrown. That rarely happens. About the only time it does is on 3-0 counts where a fastball might be the only pitch thrown (for example, a 3-0 count with a 5-run lead, or even a 3-1 or 2-0 count with any big lead, late in the game – or a 3-0 count on an opposing pitcher who is taking 100% of the time).

Now that being said, let’s say that Lackey is supposed to throw his cutter away only 5% of the time against Turner. If we observe only that one pitch and it is a cutter, Bayes tells us that there is an inference that Lackey was intending to throw that pitch MORE than 5% of the time, and we can indeed say, with some small level of certainty, that he “threw the wrong pitch.” We don’t really mean he “threw the wrong pitch.” We mean that we think (with some low degree of certainty) he had the wrong frequency matrix in his head to some significant degree (maybe he intended to throw that pitch 10% or 20% of the time rather than 5%).*

So, the next time you hear anyone say what a pitcher should be throwing on any particular pitch, or that the pitch he threw was “right” or “wrong,” it’s a good bet that they don’t really know what they’re talking about, even if they are or were a successful major league pitcher.

* Technically, we can only say something like, “We are 10% sure he was thinking 5%, 12% sure he was thinking 7%, 13% sure he was thinking 8%, etc.” – numbers for illustration purposes only.
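The footnote’s Bayesian statement can be made concrete with a toy prior over the frequency the pitcher *intended* for the cutter (the prior weights are invented): observing a single cutter multiplies each candidate frequency by its likelihood of producing a cutter.

```python
# Prior belief over Lackey's intended cutter frequency (weights invented).
prior = {0.05: 0.50, 0.10: 0.30, 0.20: 0.20}   # P(intended frequency)

def posterior_after_one_cutter(prior):
    """P(freq | saw a cutter) is proportional to freq * P(freq)."""
    unnorm = {f: f * w for f, w in prior.items()}
    total = sum(unnorm.values())
    return {f: v / total for f, v in unnorm.items()}

post = posterior_after_one_cutter(prior)
# Seeing the supposedly-5% pitch shifts belief toward the higher intended
# frequencies, which is exactly the inference described in the text
```

This is only a sketch; a real analysis would need a continuous prior and, as the article notes, would still leave us with a low degree of certainty after one pitch.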


There’s been some discussion lately on Twitter about the sacrifice bunt. Of course it is used very little anymore in MLB other than with pitchers at the plate. I’ll spare you the numbers. If you want to verify that, you can look it up on the interweb. The reason it’s not used anymore is not because it was or is a bad strategy. It’s simply because there is no point in sac bunting in most cases. I’ve written about why before on this blog and on other sabermetric sites. It has to do with game theory. I’ll briefly explain it again along with some other things. This is mostly a copy and paste from my recent tweets on the subject.

First, the notion that you can analyze the efficacy (or anything, really) of a sac bunt attempt by looking at what happens (say, the RE or WE) after an out and a runner advance is ridiculous. For some reason, sabermetricians did that reflexively for a long time, ever since Palmer and Thorn wrote The Hidden Game and concluded (wrongly) that the sac bunt was a terrible strategy in most cases. What they meant was that advancing the runner in exchange for an out is a terrible strategy in most cases, which it is. But again, EVERYONE knows that that isn’t the only thing that happens when a batter attempts to bunt. That’s not a shock. We all know that the batter can reach base on a single or an error, strike out, hit into a force or DP, pop out, or even walk. We obviously have to know how often those things occur on a bunt attempt to have any chance of figuring out whether a bunt might increase, decrease, or not change the RE or WE compared to hitting away. Why Palmer and Thorn or anyone else ever thought that looking at the RE or WE after something that occurs less than half the time on a bunt attempt (on average, an out and a runner advance occurs around 47% of the time) could answer the question of whether a sac bunt might be a good play is a mystery to me. Then again, there are probably plenty of stupid things we’re saying and doing now with respect to baseball analysis that we’ll be laughing or crying about in the future, so I don’t mean that literally.

What I am truly in disbelief about is that there are STILL saber-oriented writers and pundits who talk about the sac bunt attempt as if all that ever happens is an out and a runner advance. That’s indefensible. For cripes sake I wrote all about this in The Book 12 years ago. I have thoroughly debunked the idea that “bunts are bad because they considerably reduce the RE or WE.” They don’t. This is not controversial. It never was. It was kind of a, “Shit I don’t know why I didn’t realize that,” moment. If you still look at bunt attempts as an out and a runner advance instead of as an amalgam of all kinds of different results, you have no excuse. You are either profoundly ignorant, stubborn, or both. (I’ll give the casual fan a pass).
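The “amalgam of all kinds of different results” point is just a weighted sum. Here is a sketch; the ~47% out-and-advance figure is from the text, but every other probability and run-expectancy value below is invented for illustration:

```python
# A sac bunt attempt is an amalgam of outcomes, not just "out plus advance."
# Situation assumed: runner on first, no outs. Numbers are illustrative only.
outcomes = [
    # (probability, RE after the outcome)
    (0.47, 0.90),   # out recorded, runner advances to second (text's ~47%)
    (0.13, 1.80),   # batter reaches safely (hit, error, fielder's choice)
    (0.10, 0.70),   # strikeout or pop out, runner holds
    (0.08, 0.50),   # force out / DP territory
    (0.22, 0.88),   # everything else (counts that revert to hitting away, walks)
]

re_bunt = sum(p * re for p, re in outcomes)
re_hit_away = 0.88   # assumed RE for swinging away in the same spot
# Comparing re_bunt to re_hit_away is the right question; looking only at the
# 47% out-and-advance branch, as Palmer and Thorn did, is not.
```

With honest inputs for a specific batter, pitcher, and defense, this comparison is what decides whether a bunt attempt gains or loses expected runs (or wins).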

Anyway, without further ado, here is a summary of some of what I wrote in The Book 12 years ago about the sac bunt, and what I just obnoxiously tweeted in 36 or so separate tweets:

Maybe.

In this article, Tuffy Gosewisch, the new backup catcher for the Braves, talks catching with Fangraphs’ David Laurila. He says about what you would expect from a catcher. Nothing groundbreaking or earth-shattering – nothing blatantly silly or wrong either. In fact, catchers almost always sound like baseball geniuses. They do have to be one of the smarter ones on the field. But…

Note: This is almost verbatim from my comment on that web page:

I have to wonder how much better a catcher could be if he understood what he was actually doing (of course they do, they get paid millions, they’ve been doing it all their lives, and are presumably the best in the world at what they do. Who the hell are you, you’ve never put on the gear in your life?).

All catchers talk about how they determine the “right” pitch. I’m waiting for a catcher to say, “There is no ‘right’ pitch – there can’t be! There’s a matrix of pitches and we choose one randomly. Because you see, if there were a ‘right’ pitch and that was the one we called, the batter would know, or at least have a pretty good idea of, that same pitch, and it would be a terrible pitch, especially if the batter were a catcher!”

If different catchers and pitchers have different “right” pitches and that’s why batters can’t guess them then there certainly isn’t a “right” pitch – it must be a (somewhat) random one.

When I say “random” I mean from a distribution of pitches, each with a pre-determined (optimal) frequency, based on the batter and the game situation. Rather than it be the catcher and pitcher’s job to come up with the “right” pitch – and I explained why that concept cannot be correct – it is their responsibility to come up with the “right” distribution matrix, for example, 20% FB away, 10% FB inside, 30% curve ball, 15% change up, etc. In fact, once you do that, you can tell the batter your matrix and it won’t make any difference! He can’t exploit that information and you will maximize your success as a pitcher, assuming that the batter will exploit you if you use any other strategy.

If a catcher could come up with the “right” single pitch that the batter is not likely to figure out, without randomly choosing one from a pre-determined matrix, well….that can’t be right, again, because whatever the catcher can figure, so can (and will) the batter.

We also know that catchers don’t hit well. If there were “right” pitches, catchers would be the best hitters in baseball!

Tuffy also said this:

“You also do your best to not be predictable with pitch-calling. You remember what you’ve done to guys in previous at-bats, and you try not to stay in those patterns. Certain guys — veteran guys — will look for patterns. They’ll recognize them, and will sit on pitches.”

Another piece of bad advice! Changing your patterns is being predictable! If you have to change your patterns to fool batters your patterns were not correct in the first place! As I said, the “pattern” you choose is the only optimal one. By “pattern” I mean a certain matrix of pitches thrown a certain percentage of time given the game situation and participants involved. Any other definition of “pattern” implies predictability so for a catcher to be talking about “patterns” at all is not a good thing. There should never be an identifiable pattern in pitching unless it is a random one which looks like a pattern. (As it turns out, researchers have shown that when people are shown random sequences of coin flips and ones that are chosen to look random but are not, people more often choose the non-random ones as being random.)

Say I throw lots of FB to a batter the first two times through the order and he rakes (hits a HR and a double) on them. If those two FB were part of the correct matrix I would be an idiot to throw him fewer FB in the next PA. Because if that were part of my plan, once again, he could (and would) guess that and have a huge advantage. How many times have you heard Darling, Smoltz or some other ex-pitcher announcer say something like, “After that blast last AB (on a fastball), the last thing he’ll do here is throw him another fastball”? Thankfully for the pitcher, the announcer will invariably be wrong, and the pitcher will throw his normal percentage of fastballs to that batter – as he should.

What if I am mixing up my pitches randomly each PA but I change my mixture from time to time? Is that a good plan? No! The fact that I am choosing randomly from a matrix of pitches (each with a different fixed frequency for that exact situation) on each and every pitch means that I am “somewhat” unpredictable by definition (“somewhat” is in quotes because sometimes the correct matrix is 90% FB and 10% off-speed – is that “unpredictable?”) but the important thing is that those frequencies are optimal. If I constantly change those frequencies, even randomly, then they often will not be correct (optimal). That means that I am sometimes pitching optimally and other times not. That is not the overall optimal way to pitch of course.

The optimal way to pitch is to pitch optimally all the time (duh)! So my matrix should always be the same as long as the game situation is the same. In reality of course, the game situation changes all the time. So I should be changing my matrices all the time. But it’s not in order to “mix things up” and keep the batters guessing. That happens naturally (and in fact optimally) on each and every pitch as long as I am using the optimal frequencies in my matrix.

Once again, all of this assumes a “smart” batter. For a “dumb” batter, my strategy changes and things get complicated, but I am still using a matrix and then randomizing from it. Always. Unless I am facing the dumbest batter in the universe who is incapable of ever learning anything or perhaps if it’s the last pitch I am going to throw in my career.

There are only two correct things that a pitcher/catcher have to do – their pitch-calling jobs are actually quite easy. This is a mathematical certainty. (Again, it assumes that the batter is acting optimally – if he isn’t that requires a whole other analysis and we have to figure out how to exploit a “dumb” batter without causing him to play too much more optimally):

One, establish the game theory optimal matrix of pitches and frequencies given the game situation, personnel, and environment.

Two, choose one pitch randomly according to those frequencies (for example, if the correct matrix is 90% FB and 10% off-speed, you roll a mental 10-sided die).

Finally, it may be that catchers and pitchers do nearly the right thing (i.e. they can’t be much better even if I explain to them the correct way to think about pitching – who the hell do you think you are?) even though they don’t realize what it is they’re doing right. However, that’s possible only to an extent.

Many people are successful at what they do without understanding what it is they do that makes them successful. I’ve said before that I think catchers and pitchers do randomize their pitches to a large extent. They have to. Otherwise batters would guess what they are throwing with a high degree of certainty and Ron Darling and John Smoltz wouldn’t be wrong as often as they are when they tell us what the pitcher is going to throw (or should throw).

So how is it that catchers and pitchers can think their job is to figure out the “right” pitch (no one ever says they “flip a mental coin”) yet those pitches appear to be random? It is because they go through so many chaotic decisions in their brains that for all intents and purposes the pitch selection often ends up being random. For example, “I threw him a fastball twice in a row, so maybe I should throw him an off-speed now. But wait, he might be thinking that, so I’ll throw another fastball. But wait, he might be thinking that too, so…” Where they stop in that train of thought might as well be random!

Even if pitchers and catchers are essentially randomizing their pitches, two things are certain. One, they can’t possibly be coming up with the exact game theory optimal (GTO) matrices – and trust me, there IS an optimal one (although it may be impossible for anyone to determine it, I guarantee that someone can do a better job overall – it’s like man versus machine). Two, some pitchers and catchers will be better at pseudo-randomizing than others. In both cases there is a great deal of room for improvement in calling games and pitches.

Richard Nichols (@RNicholsLV on Twitter) sent me this link. These are notes that the author, Lee Judge, a Royals blogger for the K.C. Star, took during the season. They reflect thoughts and comments from players, coaches, etc. I thought I’d briefly comment on each one. Hope you enjoy!

Random, but interesting, things about baseball – Lee Judge

▪ If a pitcher does not have a history of doubling up on pickoff throws (two in a row) take a big lead, draw a throw and then steal on the next pitch.

Of course you can do that. But how many times can you get away with it? Once? If the pitcher or one of his teammates or coaches notices it, he’ll pick you off the next time by “doubling up.” Basically by exploiting the pitcher’s non-random and thus exploitable strategy, the runner becomes exploitable himself. A pitcher, of course, should be picking a certain percentage of the time each time he goes into the set position, based on the likelihood of the runner stealing and the value of the steal attempt. That “percentage” must be randomized by the pitcher and it “resets” each time he throws a pitch or attempts a pickoff.

By “randomize” I mean the prior action, pick or no pick, cannot affect the percentage chance of a pick. If a pitcher is supposed to pick 50% prior to the next pitch he must do so whether he’s just attempted a pickoff 0, 1, 2, or 10 times in a row. The runner can’t know that a pickoff is more or less likely based on how many picks were just attempted. In fact you can tell him, “Hey every time I come set, there’s a 50% (or 20%, or whatever) chance I will attempt to pick you off,” and there’s nothing he can do to exploit that information.

For example, if he decides that he must throw over 50% of the time he comes set (in reality the optimal % changes with the count), then he flips a mental coin (or uses something – unknown to the other team – to randomize his decision, with a .5 mean). What will happen on the average is that he won’t pick half the time, 25% of the time he’ll pick once only, 12.5% of the time he’ll pick exactly twice, 25% of the time he’ll pick at least twice, etc.
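The pick-until-pitch process described above is a simple geometric process, and a quick simulation reproduces the stated percentages (50% no pick, 25% exactly one, 25% at least two). A sketch, using the text’s 50% example rate:

```python
import random

def picks_before_pitch(p=0.5):
    """Number of pickoff throws before the pitcher delivers, when each time he
    comes set there is an independent probability p of a pickoff attempt."""
    n = 0
    while random.random() < p:
        n += 1
    return n

trials = 200_000
counts = [0, 0, 0]                # exactly 0, exactly 1, at least 2 picks
for _ in range(trials):
    counts[min(picks_before_pitch(), 2)] += 1
# Roughly 50% / 25% / 25%, matching the paragraph above; and because each
# "come set" is an independent coin flip, the runner learns nothing from
# how many picks he has already seen
```

That independence is the whole point: the runner cannot condition on the previous picks, even if he knows the 50% rate exactly.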

Now, the tidbit from the player or coach says, “does not have a history of doubling up.” I’m not sure what that means. Surely most pitchers when they do pick, will pick once sometimes and twice sometimes, etc. Do any pitchers really never pick more than once per pitch? If they do, I would guess that it’s because the runner is not really a threat and the one-time pick is really a pick with a low percentage. If a runner is not much of a threat to run, then maybe the correct pick percentage is 10%. If that’s the case, then they will not double-up 99% of the time and correctly so. That cannot be exploited, again, assuming that a 10% rate is optimal for that runner in that situation. So while it may look like they never double up, they do in fact double up 1% of the time, which is correct and cannot be exploited (assuming the 10% is correct for that runner and in that situation).

Basically what I’m saying is that this person’s comment is way too simple and doesn’t really mean anything without putting it into the context I explain above.

▪ Foul balls with two strikes can indicate a lack of swing-and-miss stuff; the pitcher can get the batters to two strikes, but then can’t finish them off.

Not much to say here. Some pitchers have swing-and-miss stuff and others don’t, and everything in-between. You can find that out by looking at…uh…their swing-and-miss percentages (presuming a large enough sample size to give you some minimum level of certainty). Foul balls with two strikes? That’s just silly. A pitcher without swing-and-miss stuff will get more foul balls and balls in play with two strikes. That’s a tautology. He’ll also get more foul balls and balls in play with no strikes, one strike, etc.

▪ Royals third-base coach Mike Jirschele will walk around the outfield every once in a while just to remind himself how far it is to home plate and what a great throw it takes to nail a runner trying to score.

If my coach has to do that I’m not sure I want him coaching for me. That being said, whatever little quirks he has or needs to send or hold runners the correct percentage of time is fine by me. I don’t know that I would be teaching or recommending that to my coaches – again, not that there’s anything necessarily wrong with it.

Bottom line is that he better know the minimum percentages that runners need to be safe in any given situation (mostly # of outs) – i.e. the break-even points – and apply them correctly to the situation (arm strength and accuracy etc.) in order to make optimal decisions. I would surely be going over those numbers with my coaches from time to time and then evaluating his sends and holds to make sure he’s not making systematic errors or too many errors in general.

▪ For the most part, the cutter is considered a weak contact pitch; the slider is considered a swing-and-miss pitch.

If that’s confirmed by PITCHf/x, fine. If it’s not, then I guess it’s not true. Swing-and-miss is really just a subset of weak contact and weak contact is a subset of contact which is a subset of a swing. The result of a swing depends on the naked quality of the pitch, where it is thrown, and the count. So while for the most part (however you want to define that – words are important!) it may be true, surely it depends on the quality of each of the pitches, on what counts they tend to be thrown, how often they are thrown at those counts, and the location they are thrown to. Pitches away from the heart of the plate tend to be balls and swing-and-miss pitches. Pitches nearer the heart tend to be contacted more often, everything else being equal.

▪ With the game on the line and behind in the count, walk the big-money guys; put your ego aside and make someone else beat you.

Stupid. Just. Plain. Stupid. Probably the dumbest thing a pitcher or manager can think/do in a game. I don’t even know what it means and neither do they. So tie game in the 9th, no one on base, 0 outs, count is 1-0. Walk the batter? That’s what he said! I can think of a hundred stupid examples like that. A pitcher’s approach changes with every batter and every score, inning, outs, runners, etc. A blanket statement like that, even as a rule of thumb, is Just. Plain. Dumb. Any interpretation of that by players and coaches can only lead to sub-optimal decisions – and does. All the time. Did I say that one is stupid?

▪ A pitcher should not let a hitter know what he’s thinking; if he hits a batter accidentally he shouldn’t pat his chest to say “my bad.” Make the hitter think you might have drilled him intentionally and that you just might do it again.

O.K. To each his own.

▪ Opposition teams are definitely trying to get into Yordano Ventura’s head by stepping out and jawing with him; anything to make him lose focus.

If he says so. I doubt much of that goes on in baseball. Not that kind of game. Some, but not much.

▪ In the big leagues, the runner decides when he’s going first-to-third; he might need a coach’s help on a ball to right field — it’s behind him — but if the play’s in front of him, the runner makes the decision.

Right, we teach that in Little League (a good manager that is). You teach your players that they are responsible for all base running decisions until they get to third. Then it’s up to the third base coach. It’s true that the third base coach can and should help the runner on a ball hit to RF, but ultimately the decision is on the runner whether to try and take third.

Speaking of taking third, while the old adage “don’t make the first or third out at third base” is a good rule of thumb, players should know that it doesn’t mean, “Never take a risk on trying to advance to third.” It means the risk has to be low (like 10-20%), but that the risk can be twice as high with 0 outs as with 2 outs. So really, the adage should be, “Never make the third out at third base, but you can sometimes make the first out at third base.”

You can also just forget about the first out part of that adage. Really, the no-out break-even point is almost exactly in between the one-out and two-out ones. In other words, with no outs, you need to be safe at third around 80% of the time; with one out, around 70%; and with two outs, around 90%. Players should be taught that and not just the “rule of thumb.” They should also be taught that the numbers change with trailing runners, the pitcher, and who the next batter or batters are. For example, with a trailing runner, making the third out is really bad, but making the first out where the trailing runner can advance is a bonus.
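Those break-even rates fall straight out of a run-expectancy table. The RE values below are rough league-average figures I’m assuming for illustration, but they reproduce numbers close to the 80/70/90 rule of thumb:

```python
# Break-even safe rates for taking third from second, from a run-expectancy
# table. RE values are approximate, assumed here only for illustration.
re = {
    # (runners, outs): expected runs for the rest of the inning
    ("2nd", 0): 1.10, ("2nd", 1): 0.66, ("2nd", 2): 0.32,
    ("3rd", 0): 1.35, ("3rd", 1): 0.95, ("3rd", 2): 0.35,
    ("empty", 1): 0.26, ("empty", 2): 0.10,
}

def break_even(outs):
    """Safe rate at which advancing to third is run-neutral:
    p * RE(3rd, outs) + (1 - p) * RE(empty, outs + 1) = RE(2nd, outs)."""
    stay = re[("2nd", outs)]
    safe = re[("3rd", outs)]
    caught = re[("empty", outs + 1)] if outs < 2 else 0.0   # third out ends inning
    return (stay - caught) / (safe - caught)

rates = {outs: break_even(outs) for outs in (0, 1, 2)}
# Roughly 0.77, 0.66, and 0.91 with these inputs: the same shape as the
# 80/70/90 rule of thumb, with the no-out figure in the middle
```

A win-expectancy version of the same calculation would be needed late in close games, but the structure is identical.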

▪ Even in a blowout there’s something to play for; if you come close enough to make the other team use their closer, maybe he won’t be available the next night.

I’m pretty sure the evidence suggests that players play at their best (more or less) regardless of the score. That makes sense under almost any economic or cognitive theory of behavior since players get paid big money to have big numbers. Maybe they do partially because managers and coaches encourage them to do so with tidbits like that. I don’t know.

Depending on what they mean by blowout, what they’re saying is this: say you have a 5% chance of winning a game down six runs in the late innings. Now say you have a 20% chance of making it a 3-run-or-less game, which means that the opposing closer comes into the game. And say that his coming into the game gives you another 2% chance of winning tomorrow because he might not be available, and an extra 1% the day after that (if it’s the first game in a series). So rather than a 5% win expectancy, you actually have 5% plus 20% * 3%, or a 5.6% WE. Is that worth extra effort? To be honest, a manager and coach are supposed to teach their players to play hard (within reason) regardless of the score, for two reasons: One, because it makes for better habits when the game is close, and two, at exactly what point does the game become a blowout (Google the sorites paradox)?
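Spelled out as arithmetic (all inputs are the hypothetical numbers from the paragraph above):

```python
# The win-expectancy arithmetic from the text, with its hypothetical inputs.
we_now = 0.05                  # chance of winning tonight, down six runs late
p_close = 0.20                 # chance of making it a 3-run-or-less game
closer_bonus = 0.02 + 0.01     # extra WE tomorrow plus the day after,
                               # if the closer is forced to pitch tonight

effective_we = we_now + p_close * closer_bonus
# 0.05 + 0.20 * 0.03 = 0.056, the 5.6% figure in the text
```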

▪ If it’s 0-2, 1-2 and 2-2, those are curveball counts and good counts to run on. That’s why pitchers often try pickoffs in those counts.

On the other hand, 0-2 is not a good count to run on because of the threat of the pitchout. As it turns out, the majority of SB attempts (around 68%) occur at neutral counts. Only around 16% of all steal attempts occur at those pitchers’ counts. So whoever said that is completely wrong.

Of course pitchers should (and do) attempt more pickoffs the greater the chance of a steal attempt. That also tends to make it harder to steal (hence the game theory aspect).

That being said, some smart people (e.g., Professor Ted Turocy of Chadwick Baseball Bureau) believe that there is a Nash equilibrium between the offense and defense with respect to base stealing (for most players – not at the extremes) such that neither side can exploit the other by changing their strategy. I don’t know if it’s true or not. I think Professor Turocy may have a paper on this. You can check it out on the web or contact him.
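For the curious, here is what a mixed-strategy equilibrium looks like in a toy version of the steal/pitchout game. Every payoff below is invented purely for illustration; it’s the indifference logic, not the numbers, that matters:

```python
# A toy steal/pitchout game solved for its mixed-strategy equilibrium.
# Payoffs are runs to the offense; every number is invented for illustration.
steal_vs_pitchout = -0.15  # runner is usually thrown out
steal_vs_normal   =  0.05
stay_vs_pitchout  =  0.03  # defense wastes a ball
stay_vs_normal    =  0.00

# Pitchout frequency q that leaves the offense indifferent between strategies:
#   q*steal_vs_pitchout + (1-q)*steal_vs_normal
#     = q*stay_vs_pitchout + (1-q)*stay_vs_normal
q = (steal_vs_normal - stay_vs_normal) / (
    (steal_vs_normal - stay_vs_normal) + (stay_vs_pitchout - steal_vs_pitchout))

# Steal frequency p that leaves the defense indifferent between its options:
p = (stay_vs_pitchout - stay_vs_normal) / (
    (stay_vs_pitchout - stay_vs_normal) + (steal_vs_normal - steal_vs_pitchout))

print(f"equilibrium: pitch out {q:.1%} of the time, steal {p:.1%} of the time")
```

At equilibrium neither side can gain by changing frequencies, which is exactly the “neither side can exploit the other” condition described above.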

▪ Don’t worry about anyone’s batting average until they have 100 at-bats.

How about “Don’t worry about batting average…period.” In so many ways this is wrong. I would have to immediately fire whoever said that if it was a coach, manager or executive.

▪ It’s hard to beat a team three times in a row; teams change starting pitchers every night and catching three different pitchers having a down night is not the norm.

Whoever said this should be fired sooner than the one above. As in, before they even finished that colossally innumerate sentence.

▪ At this level, “see-it-and-hit” will only take you so far. The best pitchers are throwing so hard you have to study the scouting reports and have some idea of what’s coming next.

If that’s your approach at any level, you have a lot to learn, and that goes for 20 or 50 years ago the same as it does today. If pitchers were throwing 60 mph, maybe not so much. But even at 85 you definitely need to know what you’re likely to get at any count and in any situation from that specific pitcher. Batters who tell you that they are “see-it-and-hit-it” batters are lying to you or to themselves. There is no such thing in professional baseball. Even the most unsophisticated batter in the world knows that at 3-0, no outs, no runners on, his team down 6 runs, he’s likely to be getting 100% fastballs.

▪ If a pitcher throws a fastball in a 1-1 count, nine out of 10 times, guess fastball. But if it’s that 10th time and he throws a slider instead, you’re going to look silly.

WTF? If you go home expecting your house to be empty but there are two giraffes and a midget, you’re going to be surprised.

▪ Good hitters lock in on a certain pitch, look for it and won’t come off it. You can make a guy look bad until he gets the pitch he was looking for and then he probably won’t miss it.

Probably have to fire this guy too. That’s complete bullshit. Makes no sense from a game-theory perspective or from any perspective for that matter. So just never throw him that pitch right? Then he can’t be a good hitter. But now if you never throw him the pitch he’s looking for, he’ll stop looking for it, and will instead look for the alternative pitch you are throwing him. So you’ll stop throwing him that pitch and then…. Managers and hitting coaches (and players) really (really) need a primer on game theory. I am available for the right price.
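To see why the “so just never throw him that pitch” logic chases its own tail into a mixed strategy, here is a toy fictitious-play simulation with made-up payoffs. Each side repeatedly best-responds to the other’s observed frequencies, exactly the adjust-counter-adjust loop described above, and the frequencies settle at the equilibrium mix:

```python
# Fictitious play on a toy batter/pitcher guessing game. Payoffs are invented
# "expected wOBA" numbers; the batter wants them high, the pitcher wants them low.
payoff = [[0.450, 0.250],   # batter sits fastball vs. {FB thrown, OS thrown}
          [0.300, 0.400]]   # batter sits off-speed vs. {FB thrown, OS thrown}

bat_counts = [1, 1]  # how often the batter has sat on each pitch so far
pit_counts = [1, 1]  # how often the pitcher has thrown each pitch so far

for _ in range(100_000):
    # Each side best-responds to the other's observed frequencies.
    bat = max(range(2), key=lambda i: payoff[i][0] * pit_counts[0]
                                      + payoff[i][1] * pit_counts[1])
    pit = min(range(2), key=lambda j: payoff[0][j] * bat_counts[0]
                                      + payoff[1][j] * bat_counts[1])
    bat_counts[bat] += 1
    pit_counts[pit] += 1

print("batter sits FB:", round(bat_counts[0] / sum(bat_counts), 3))    # near 1/3
print("pitcher throws FB:", round(pit_counts[0] / sum(pit_counts), 3))  # near 1/2
```

With these payoffs the stable answer is the pitcher mixing about 50/50 and the batter sitting fastball about a third of the time; neither “always throw it” nor “never throw it” survives the loop.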

▪ According to hitting coach Dale Sveum, hitters should not give pitchers too much credit; wait for a mistake and if the pitcher makes a great pitch, take it. Don’t start chasing great pitches; stick to the plan and keep waiting for that mistake.

Now why didn’t I think of that!

▪ The Royals are not a great off-speed hitting club, so opposition pitchers want to spin it up there.

Same as above. Actually, remember this: You cannot tell how good or bad a player or team is at hitting any particular pitch by looking at the results. You can only tell by how often they get each type of pitch. Game theory tells us that the results of all the different pitches (type, location, etc.) will be about the same for any hitter. What changes depending on that hitter’s strengths and weaknesses are the frequencies. And this whole “team is good/bad at X” business is silly. It’s about the individual players, of course. I’m pretty sure there was at least one hitter on the team who was good at hitting off-speed pitches.

Also, never evaluate or define “good hitting” based on batting average, which most coaches and managers still do even in 2016. I don’t have to tell you that, dear sophisticated reader. However, you should also not define good or bad hitting on a pitch level based on OPS or wOBA (presumably on contact) either. You need to include pitches not put into play, and you need to incorporate the count. For example, at a 3-ball count there is a huge premium on not swinging at a ball; your result on contact is not so important. At 2-strike counts, not taking a strike is especially important. Whenever you see pitch-level numbers that exclude balls not swung at, or especially that cover only balls put into play (which is usually the case), be very wary of those numbers. For example, a good off-speed-hitting player will tend to have good strike-zone recognition skills (and not necessarily good results on contact), because many more off-speed pitches are thrown in pitchers’ counts and out of the strike zone.
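Here is a bare-bones sketch of what count-aware pitch evaluation looks like: a taken pitch is worth the change in the batter’s expected run value between counts. The count run values and terminal-event values below are rough illustrations, not real data:

```python
# Count-aware pitch valuation sketch. A taken pitch's value is the change in
# the batter's expected run value between counts; all numbers are rough
# illustrations, not real data.
count_value = {  # runs relative to the start of the PA, by (balls, strikes)
    (0, 0): 0.000, (1, 0): 0.032, (0, 1): -0.044,
    (2, 0): 0.068, (1, 1): -0.015, (0, 2): -0.106,
    (3, 0): 0.130, (2, 1): 0.028, (1, 2): -0.083,
    (3, 1): 0.095, (2, 2): -0.040, (3, 2): 0.010,
}
WALK, STRIKEOUT = 0.32, -0.29  # rough run values of the terminal events

def taken_pitch_value(balls, strikes, call):
    """Run value to the batter of taking a pitch that gets the given call."""
    before = count_value[(balls, strikes)]
    if call == "ball":
        after = WALK if balls == 3 else count_value[(balls + 1, strikes)]
    else:  # called strike
        after = STRIKEOUT if strikes == 2 else count_value[(balls, strikes + 1)]
    return after - before

# The premium on taking a ball is much bigger deep in the count:
print(taken_pitch_value(3, 1, "ball"), taken_pitch_value(0, 1, "ball"))
```

With these stand-in values, a taken ball at 3-1 is worth several times what a taken ball at 0-1 is worth, which is the point: any pitch-level metric that throws away taken pitches and the count is missing most of the picture.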

▪ According to catcher Kurt Suzuki, opposition pitchers should not try to strike out the Royals. Kansas City hitters make contact and a pitcher that’s going for punchouts might throw 100 pitches in five innings.

Wait. If they are a good contact team, doesn’t that mean that you can try and strike them out without running up your pitch count? Another dumb statement. Someone should tell Mr. Suzuki that pitch framing is really important.

▪ If you pitch down in the zone you can use the whole plate; any pitch at the knees is a pretty good pitch (a possible exception is down-and-in to lefties). If you pitch up in the zone you have to hit corners.

To some extent that’s true though it’s (a lot) more complicated than that. What’s probably more important is that when pitching down in the zone you want to pitch more away and when pitching up in the zone more inside. By the way, is it true lefties like (hit better) the down-and-in pitch more than righties? No, it is not. Where does that pervasive myth come from? Where do all the hundreds of myths that players, fans, coaches, managers, and pundits think are true come from?

▪ If you pitch up, you have to be above the swing path.

Not really sure what that means. Above the swing “path?” A swing path tends to follow the pitch, so that doesn’t make too much sense. “Path” implies angle of attack, and to say “above” or “below” an angle of attack doesn’t really make sense either. Maybe he means, “If you are going to pitch high, pitch really high,” or, “If the batter tends to be a high-ball hitter, pitch really high?”

▪ Numbers without context might be meaningless; or worse — misleading

I don’t know what that means. Anything might be misleading or worthless without context. Words, numbers, apple pie, dogs, cats…

▪ All walks are not equal: a walk at the beginning of an inning is worth more than a walk with two outs, a walk to Jarrod Dyson is worth more than a walk to Billy Butler.

Correct. I might give this guy one of the jobs vacated by the guys I fired. Players, especially pitchers (but batters and fielders too), should always know the relative value of the various offensive events depending on the batter, pitcher, score, inning, count, runners, etc., and then tailor their approach to those values. This is one of the most important things in baseball.

▪ So when you look at a pitcher’s walks, ask yourself who he walked and when he walked them.

True. Walks should be weighted towards bases open, 2 outs, sluggers, close games, etc. If not, and the sample is large, then the pitcher is likely either doing something wrong, or he has terrible command/control, or both. For example, Greg Maddux went something like 10 years before he walked his first pitcher.

▪ When a pitcher falls behind 2-0 or 3-1, what pitch does he throw to get back in the count? Can he throw a 2-0 cutter, sinker or slider, or does he have to throw a fastball down the middle and hope for the best?

All batters, especially in this era of big data, should be acutely aware of a pitcher’s tendencies against their type of batter in any given situation and count. One of the most important questions is, “Does he have enough command of his secondary pitches (and how good is his fastball even when the batter knows it’s coming) to throw them in hitters’ counts, especially the 3-2 count?”

▪ Hitters who waggle the bat head have inconsistent swing paths.

I never heard that before. Doubt it is anything useful.

▪ The more violent the swing, the worse the pitch recognition. So if a guy really cuts it loose when he swings and allows his head to move, throw breaking stuff and change-ups. If he keeps his head still, be careful.

Honestly, if that’s all you know about a batter, someone is not doing their homework. And again, there’s game theory that must be accounted for and appreciated. Players, coaches and managers are just terrible at understanding this very important part of baseball especially the batter/pitcher matchup. If you think you can tell a pitcher to throw a certain type of pitch in a certain situation (like if the batter swings violently throw him off-speed), then surely the batter can and will know that too. If he does, which he surely will – eventually – then he basically knows what’s coming and the pitcher will get creamed!

Let me explain game theory wrt sac bunting using tonight’s CLE game as an example. Bottom of the 10th, leadoff batter on first, Gimenez is up. He is a very weak batter with little power or on-base skills, and the announcers say, “You would expect him to be bunting.” He clearly is.

Now, in general, to determine whether to bunt or not, you estimate the win expectancies (WE) based on the frequencies of the various outcomes of the bunt, versus the frequencies of the various outcomes of swinging away. Since, for a position player, those two final numbers are usually close, even in late tied-game situations, the correct decision usually hinges on: On the swing side, whether the batter is a good hitter or not, and his expected GDP rate. On the bunt side, how good of a sac bunter is he and how fast is he (which affect the single and ROE frequencies, which are an important part of the bunt WE)?

Gimenez is a terrible hitter, which favors the bunt attempt, but he is also not a good bunter and he is slow, both of which favor hitting away. So the WEs are probably somewhat close.

One thing that affects the WE for both bunting and swinging, of course, is where the third baseman plays before the pitch is thrown. Now, in this game, it was obvious that Gimenez was bunting all the way and everyone seemed fine with that. I think the announcers and probably everyone would have been shocked if he didn’t (we’ll ignore the count completely for this discussion – the decision to bunt or not clearly can change with it).

The announcers also said, “Sano is playing pretty far back for a bunt.” He was playing just on the dirt, I think, which is pretty much “in between” for a bunt situation. So it did seem like he was not playing up enough.

So what happens if he moves up a little? Maybe now it is correct to NOT bunt because the more he plays in, the lower the WE for a bunt and the higher the WE for hitting away! So maybe he shouldn’t play up more (the assumption is that if he is bunting, then the closer he plays, the better). Maybe then the batter will hit away and correctly so, which is now better for the offense than bunting with the third baseman playing only half way. Or maybe if he plays up more, the bunt is still correct but less so than with him playing back, in which case he SHOULD play up more.

So what is supposed to happen? Where is the third baseman supposed to play and what does the batter do? There is one answer and one answer only. How many managers and coaches do you think know the answer (they should)?

The third baseman is supposed to play all the way back “for starters” in his own mind, such that it is clearly correct for the batter to bunt. Now he knows he should play in a little more. So in his mind again, he plays up just a tad bit.

Now is it still correct for the batter to bunt? IOW, is the bunt WE higher than the swing WE given where the third baseman is playing? If it is, of course he is supposed to move up just a little more (in his head).

When does he stop? He stops of course when the WE from bunting is exactly the same as the WE from swinging. Where that is completely depends on those things I talked about before, like the hitting and bunting prowess of the batter, his speed, and even the pitcher himself.

What if he keeps moving up in his mind and the WE from bunting is always higher than hitting, like with most pitchers at the plate with no outs? Then the 3B simply plays in as far as he can, assuming that the batter is bunting 100%.
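The move-up-until-indifferent logic above can be written down in a few lines. The two WE curves below are invented stand-ins with the right shapes (bunting gets worse as the third baseman creeps in, swinging gets better); the mechanism, not the numbers, is the point:

```python
# Toy model of the third baseman's equilibrium depth. Depth d runs from 0
# (all the way back) to 1 (all the way in). Both WE curves are invented.
def we_bunt(d):
    return 0.56 - 0.10 * d   # bunt WE falls as the 3B plays in

def we_swing(d):
    return 0.50 + 0.06 * d   # swing WE rises as the 3B plays in

def equilibrium_depth():
    lo, hi = 0.0, 1.0
    if we_bunt(hi) >= we_swing(hi):  # bunt still better even all the way in:
        return hi                     # play in; batter bunts 100%
    if we_bunt(lo) <= we_swing(lo):  # swing still better even all the way back:
        return lo                     # play back; batter swings 100%
    for _ in range(60):               # otherwise bisect for the indifference point
        mid = (lo + hi) / 2
        if we_bunt(mid) > we_swing(mid):
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

d = equilibrium_depth()
print(f"equilibrium depth {d:.3f}: "
      f"WE(bunt)={we_bunt(d):.4f}, WE(swing)={we_swing(d):.4f}")
```

At the interior solution the two WEs are equal, so the batter can do whatever he wants; only at the clamped endpoints is one action 100% correct.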

So in our example, if Sano is indeed playing at the correct depth which maybe he was and maybe he wasn’t, then the WE from bunting and hitting must be exactly the same, in which case, what does the batter do? It doesn’t matter, obviously! He can do whatever he wants, as long as the 3B is playing correctly.

So in a bunt situation like this, assuming that the 3B (and other fielders, if applicable) is playing reasonably correctly, it NEVER matters what the batter does. That should be the case in every single potential sac bunt situation you see in a baseball game. It NEVER matters what the batter does: bunting and not bunting are equally “correct.” They result in exactly the same WE.

The only exceptions (which do occur) are when the WE from bunting is always higher than swinging when the 3B is playing all the way up (a poor hitter and/or exceptional bunter) OR the WE from swinging is always higher even when the 3B is playing completely back (a good or great hitter and/or poor bunter).

So unless you see the 3B playing all the way in or all the way back, and the defense is playing reasonably optimally, it NEVER matters what the batter does. Bunt or not, the win expectancy is exactly the same! And if the third baseman plays all the way in or all the way back and is playing optimally, then it is always correct for the batter to bunt or not bunt 100% of the time.

I won’t go into this too much, because the post assumed that the defense was playing optimally: i.e., it was in a “Nash equilibrium” (as I explained, playing at a depth such that the WE for bunting and swinging are exactly equal), or it was correctly playing all the way in (the WE for bunting still equal to or greater than for swinging) or all the way back (the WE for swinging >= that of bunting). But if the defense is NOT playing optimally, then the batter MUST bunt or swing away 100% of the time.

This is critical and amazingly there is not ONE manager or coach in MLB that understands it and thus correctly utilizes a correct bunt strategy or bunt defense.

I just downloaded my Kindle version of the brand spanking new Hardball Times Annual, 2014 from Amazon.com. It is also available from Createspace.com (best place to order).

Although I was disappointed with last year’s Annual, I have been very much looking forward to reading this year’s, as I have enjoyed it tremendously in the past, and have even contributed an article or two, I think. To be fair, I am only interested in the hard-core analytical articles, which comprise a small part of the anthology. According to the TOC, the book is split into five parts: one, the “2013 Season,” which consists of reviews/views of each of the six divisions plus one chapter about the post-season; two, general commentary; three, history; four, analysis; and finally, a glossary of statistical terms and short bios of the various illustrious authors (including Bill James and Rob Neyer).

As I said, the only chapters which interest me are the ones in the Analysis section, and those are the ones that I am going to review, starting with Jeff Zimmerman’s, “Shifty Business, or the War Against Hitters.” It is mostly about the shifts employed by infielders against presumably extreme pull (and mostly slow) hitters. The chapter is pretty good with lots of interesting data mostly provided by Inside Edge, a company much like BIS and STATS, which provides various data to teams, web sites, and researchers (for a fee). It also raised several questions in my mind, some of which I wish Jeff had answered or at least brought up himself. There were also some things that he wrote which were confusing – at least in my 50+ year-old mind.

He starts out, after a brief intro, with a chart (BTW, if you have the Kindle version, unless you make the font size tiny, some of the charts get cut off) that shows the number, BABIP, and XBH% of plays where a ball was put into play with a shift (and various kinds of shifts), no shift, no doubles defense (OF deep and corners guarding lines), infield in, and corners in (expecting a bunt). This is the first time I have seen any data with a no-doubles defense, infield in, and with the corners up anticipating a bunt. The numbers are interesting. With a no-doubles defense, the BABIP is quite high and the XBH% seems low, but unfortunately Jeff does not give us a baseline for XBH% other than the values for the other situations, shift, no shift, etc., although I guess that pretty much includes all situations. I have not done any calculations, but the BABIP for a no-doubles defense is so high and the reduction in doubles and triples is so small, that it does not look like a great strategy off the top of my head. Obviously it depends on when it is being employed.

The infield-in data is also interesting. As expected, the BABIP is really elevated. Unfortunately, I don’t know if Jeff includes ROE and fielder’s choices (with no outs) in that metric. What is the standard? With the infield in, there are lots of ROE and lots of throws home where no out is recorded (a fielder’s choice). I would like to know if these are included in the BABIP.

For the corners playing up expecting a bunt, the numbers include all BIP, mostly bunts I assume. It would have been nice had he given us the BABIP when the ball is not bunted (and bunted). An important consideration for whether to bunt or not is how much not bunting increases the batter’s results when he swings away.

I would also have liked to see wOBA or some metric like that for all situations – not just BABIP and XBH%. It is possible, in fact likely, that walk and K rates vary in different situations. For example, perhaps walk rates increase when batters are facing a shift because they are not as eager to put the ball in play or the pitchers are trying to “pitch into the shift” and are consequently more wild. Or perhaps batters hit more HR because they are trying to elevate the ball as opposed to hitting a ground ball or line drive. It would also be nice to look at GDP rates with the shift. Some people, including Bill James, have suggested that the DP is harder to turn with the fielders out of position. Without looking at all these things, it is hard to say that the shift “works” or doesn’t work just by looking at BABIP (and even harder to say to what extent it works).

Jeff goes on to list the players against whom the shift is most often employed. He gives us the shift and no-shift BABIP and XBH%. Collectively, their BABIP fell 37 points with the shift, and it looks like their XBH% fell a lot too (although, for some reason, I don’t think Jeff gives us that collective number). He writes:

…their BABIP [for these 20 players] collectively fell 37 points…when hitting with the shift on. In other words, the shift worked.

I am not crazy about that conclusion – “the shift worked.” First of all, as I said, we need to know a lot more than BABIP to conclude that “the shift worked.” And even if it did “work” we really want to know by how much in terms of wOBA or run expectancy. Nowhere is there an attempt by Jeff to do that. 37 points seems like a lot, but overall it could be only a small advantage. I’m not saying that it is small – only that without more data and analysis we don’t know.

Also, when and why are these “no-shifts” occurring? Jeff is comparing shift BIP data to no-shift BIP data and he is assuming that everything else is the same. That is probably a poor assumption. Why are these no-shifts occurring? Probably first and foremost because there are runners on base. With runners on base, everything is different. It might also be with a completely different pool of pitchers and fielders. Maybe teams are mostly shifting when they have good fielders? I have no idea. I am just throwing out reasons why it may not be an apples-to-apples comparison when comparing “shift” results to “no-shift” results.

It is also likely that the pool of batters is different with a shift and no shift, even though he only looked at the batters who had the most shifts against them. In fact, a better method would have been a “delta” method, whereby he would use a weighted average of the differences between shift and no-shift for each individual player.
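A minimal sketch of that delta method, with hypothetical players and a harmonic-mean weighting (one common choice for weighting paired samples):

```python
# The "delta method" in miniature: average each batter's own shift-vs-no-shift
# BABIP gap, weighted by the harmonic mean of his two sample sizes.
# All players and numbers below are hypothetical.
players = [
    # (shift BIP, shift hits, no-shift BIP, no-shift hits)
    (120, 30, 200, 62),
    ( 80, 22,  90, 30),
    (150, 40, 110, 36),
]

num = den = 0.0
for s_bip, s_hits, n_bip, n_hits in players:
    delta = s_hits / s_bip - n_hits / n_bip   # this batter's BABIP change
    weight = 2 / (1 / s_bip + 1 / n_bip)      # harmonic mean of the two samples
    num += weight * delta
    den += weight

print(f"weighted BABIP change with the shift: {num / den:+.3f}")
```

Because each batter is compared only to himself, differences in the shift and no-shift player pools wash out of the estimate.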

He then lists the speed score and GB and line drive pull percentages for the top ten most shifted players. The average Bill James speed score was 3.2 (I assume that is slow, but again, I don’t see where he tells us the average MLB score), GB pull % was 80% and LD pull % was 62%. The average MLB GB and LD pull %, Jeff tells us, is 72% and 50%, respectively. Interestingly several players on that list were at or below the MLB averages in GB pull %. I have no idea why they are so heavily shifted on.

Jeff talks a little bit about some individual players. For example, he mentions Chris Davis:

“Over the first four months of the season, he hit into an average of 29 shifts per month, and was able to maintain a .304 BA and a .359 BABIP. Over the last two months of the season, teams shifted more often against him…41 times per month. Consequently, his BA was .250 and his BABIP was .293.

The shift was killing him. Without a shift employed, Davis hit for a .425 BABIP…over the course of the 2013 season. When the shift was set, his BABIP dropped to .302…

This reminds me a little of the story that Daniel Kahneman, 2002 Nobel Prize Laureate in Economics, tells about teaching military flight instructors that praise works better than punishment. One of the instructors said:

“On many occasions I have praised flight cadets for clean execution of some aerobatic maneuver, and in general when they try it again, they do worse. On the other hand, I have often screamed at cadets for bad execution, and in general they do better the next time.”

Of course the reason for that was “regression towards the mean.” No matter what you say to someone who has done worse than expected, they will tend to do better next time, and vice versa for someone who has just done better than expected.

If Chris Davis hits .304 the first four months of the season with a BABIP of .359, and his career numbers are around .260 and .330, then no matter what you do against him (wear your underwear backwards, for example), his next two months are likely going to show a reduction in both of these numbers! That does not necessarily imply a cause and effect relationship.

He makes the same mistake with several other players that he discusses. In fact, I have always had the feeling that at least part of the “observed” success of the shift was simply regression towards the mean. Imagine this scenario – I’m not saying that this is exactly what happens or happened, but to some extent I think it may be true. You are a month into the season, and X number of players, say they are all pull hitters, are just killing you with hits to the pull side. Their collective BA and BABIP are .380 and .415. You decide enough is enough and you shift against them. What do you think is going to happen, and what do you think everyone is going to conclude about the effectiveness of the shift, especially when they compare the “shift” to “no-shift” numbers?
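You can watch that selection effect happen with a few lines of simulation: give a hitter a fixed .330 true-talent BABIP, keep only the hot first halves, and the second halves come back to earth with no shift and no intervention at all. (The talent level, sample sizes, and .360 cutoff are all arbitrary choices for illustration.)

```python
# Selection + regression demo: a hitter with a fixed .330 true-talent BABIP,
# no shift, no adjustments. Condition on a hot first half and watch the
# second half "decline" all by itself.
import random
random.seed(1)

TRUE_BABIP, BIP = 0.330, 250  # balls in play per half season

def half_season():
    return sum(random.random() < TRUE_BABIP for _ in range(BIP)) / BIP

pairs = [(half_season(), half_season()) for _ in range(5_000)]
hot = [second for first, second in pairs if first >= 0.360]

print(f"{len(hot)} hot first halves (.360+); "
      f"their second halves average {sum(hot) / len(hot):.3f}")
```

The second-half average lands near .330 every time, because the halves are independent draws from the same talent; the “drop” from .360+ is pure selection.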

Again, I think that the shift gives the defense a substantial advantage. I am just not 100% sure about that and I am definitely not sure about how much of an advantage it is and whether it is correctly employed against every player.

Jeff also shows us the number of times that each team employs the shift. Obviously not every team faces the same pool of batters, but the differences are startling. For example, the Orioles shifted 470 times and the Nationals 41! The question that pops into my mind is, “If the shift is so obviously advantageous (37 points of BABIP) why aren’t all teams using it extensively?” It is not like it is a secret anymore.

Finally, Jeff discusses bunting to beat the shift. That is obviously an interesting topic. Jeff shows that not many batters opt to do that, but when they do, they reach base 58% of the time. Unfortunately, out of around 6,000 shifts where the ball was put into play, players only bunted 48 times! That is an amazingly low number. Jeff (likely correctly) hypothesizes that players should be bunting more often (a lot more often?). That is probably true, but I don’t think we can say how often and by whom. Maybe most of the players who did not bunt are terrible bunters, and all they would be doing is bunting back to the pitcher or fouling the ball off or missing. And BTW, telling us that a bunt results in reaching base 58% of the time is not quite the whole story. We also need to know how many bunt attempts resulted in a strike. Imagine a player who attempted to bunt 10 times, fouled it off or missed 9 times, and reached base once. That is probably not a good result, even though it looks like he bunted 1.000!
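Here is the per-attempt arithmetic, with invented numbers for everything except the 58% reach rate from the chapter:

```python
# Per-attempt value of bunting against the shift. Only the 58% reach rate
# comes from the chapter; every other number is invented for illustration.
p_in_play = 0.55              # attempt actually puts the ball in play
p_reach_given_in_play = 0.58  # from the chapter
run_reach = 0.38              # ~ value of a single / ROE
run_out = -0.25               # value of a bunt out
run_strike = -0.05            # rough cost of a foul/miss (strike added)

ev_attempt = (p_in_play * (p_reach_given_in_play * run_reach
                           + (1 - p_reach_given_in_play) * run_out)
              + (1 - p_in_play) * run_strike)
print(f"runs per bunt attempt: {ev_attempt:+.3f}")
```

Even with the foul/miss penalty included, these stand-in numbers come out positive, but the point stands: the in-play reach rate alone cannot tell you whether the attempts are worthwhile.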

It is also curious to me that 7 players bunted into a shift almost 4 times each, and reached base 16 times (a .615 BA). They are obviously decent or good bunters. Why are they not bunting every time until the shift is gone against them? They are smart enough to occasionally bunt into a shift, but not smart enough to always do it? Something doesn’t seem right.

Anyway, despite my many criticisms, it was an interesting chapter and well-done by Jeff. I am looking forward to reading the rest of the articles in the Analysis section and if I have time, I will review one or more of them.

I’m talking about John Farrell and the Boston Red Sox. They had 24 sacrifice bunts during the regular season, the 4th fewest in baseball. I don’t know how many they attempted or where they rank in attempts.

In game 6 of the ALCS, Boston attempted 2 sacrifice bunts, one with Victorino and runners on first and second, and one with Drew and a runner on second. With Victorino the game was tied, and with Drew the Sox were down by a run.

There is nothing necessarily wrong with both of those attempts. As I have always said, in a potential bunt situation, if the batter is a good bunter and fast (I assume both of those batters are), he can bunt some (specific) percentage of the time on a random basis, as long as the infield is not overplaying one way or another. If the infield is playing optimally, according to game theory, then it doesn’t matter whether the batter bunts or not – the win expectancy (WE) should be the same for both strategies. That is the definition of the defense playing optimally – making the offense agnostic as far as bunting or hitting away is concerned.

Now, it is possible that even if the infield is playing up as far as they can, the bunt can still have a higher WE than hitting away. I suspect for that to be the case, the batter has to be a very poor hitter and an excellent bunter with good or great speed. It is also possible for the defense to be playing back all the way yet the WE for hitting away is still greater than the WE for bunting. That is often the case with good hitters at the plate who are also not good bunters and/or they are not fast. However, if the defense is playing anywhere but all the way back (as they would if it were not a potential bunt situation) or all the way in, the assumption is that they are playing in a configuration such that the batter can bunt or not bunt and the WE is exactly the same. If that isn’t true, then the batter must either bunt a lot (if the defense is playing too far back) or hit away a lot (if the defense is playing too far in).

Back to these two situations. The thing about the defense and the WE (of both bunting and not bunting) is that the latter is not static throughout the PA. As the count changes, so does the WE for both the bunt and hitting away, especially hitting away. That is obvious, right? If the count goes to 1-0, the batter becomes a better hitter. To a lesser extent, even if the defense remains the same, even the WE of the bunt attempt probably goes up. One, you are more likely to get a buntable pitch, two, if you bunt foul or take a strike, you are now 1-1 rather than 0-1, and three, since you don’t have to offer at every pitch even when bunting, you are more likely to ultimately draw a walk when attempting to bunt at a 1-0 count.

As the count changes, the defense should move to reflect the fact that the WE from hitting away likely changes more than the WE for bunting. If the count goes in the hitter’s favor, they should move back. It is not so much that they now anticipate the bunt less often, although they should; it is that they want to position themselves so that the WE from the bunt and from hitting away remain exactly the same – and that requires moving back in hitters’ counts (and up in pitchers’ counts other than with 2 strikes). So really, even when the count changes, the batter should still be agnostic as far as bunting or hitting away is concerned – it shouldn’t matter what they do.

But, we all know that managers often employ less-than-optimal strategies, especially when it comes to the sacrifice bunt, both on offense and on defense. It is likely that the defense did not move back when the count went to 1-0 on Victorino and 2-1 later on Drew. If they did move back at the 1-0 or 2-1 count, then either the bunt or the non-bunt would be justified. Let’s assume that the defense didn’t move, though. And let’s use run expectancy (RE) rather than WE for my analysis, just for simplicity’s sake.

In a low run environment, the RE with runners on first and second and 0 outs is around 1.5 runs. Let’s assume that that is the case with the defense playing a little up in anticipation of a possible bunt. If the defense is playing optimally, the RE for the bunt and hitting away should both be 1.5 runs, given the batter, pitcher, fielders, etc. Again, at that point it doesn’t matter whether the batter bunts or not. Now the count goes to 1-0. How much does that affect the RE? At a 1-0 count, instead of an RE of 1.5 runs, it is around 1.56, so somehow the bunt has to be worth at least that much for it to be correct to bunt. The only way that is possible, assuming that the bunt and hitting away had the same RE when the AB started, is for the defense to back up at the 1-0 count. And even if the defense did move back, for the offense to be playing optimally according to game theory, when the count goes to 1-0 the batter has to hit away more often!
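In code form, the Victorino arithmetic: the swing-away REs are the ones from the text, while the bunt RE at 1-0 is an invented number, anchored to the premise that bunt and swing were equal at 0-0 with the defense optimally placed:

```python
# The Victorino spot in numbers. Swing-away REs come from the text; the bunt
# REs are invented, anchored to bunt = swing at 0-0 with an optimal defense.
re_swing = {"0-0": 1.50, "1-0": 1.56}  # runners on 1st and 2nd, 0 out
re_bunt  = {"0-0": 1.50, "1-0": 1.51}  # bunt RE gains only a little at 1-0

for count in ("0-0", "1-0"):
    edge = re_swing[count] - re_bunt[count]
    verdict = "indifferent" if abs(edge) < 1e-9 else "swing away"
    print(f"{count}: swing {re_swing[count]:.2f}, "
          f"bunt {re_bunt[count]:.2f} -> {verdict}")
```

A defense that stands still after ball one has turned an indifference point into a clear edge for hitting away.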

In case you are actually able to follow this, you might be asking, “Why must the offense still bunt and hit away in a certain proportion even when the defense makes them agnostic to their own strategy?” If they don’t, the defensive team can change their positioning at some point before the pitch arrives at the plate or the batter gives away his intention. As well, it tips off the defense the next time this situation comes up, although you can change your strategy to account for that.

The other thing is that Victorino used to be a switch hitter. In fact, I could swear that he hit from the left side in game 3 or game 4. If Victorino bats from the left side, the RE from hitting away with runners on first and second is higher for 2 reasons: One, fewer GDP, and two, he moves the runners over more often on an out.

Which brings up the second instance with Drew at the plate and a runner on second only. With a runner on second and no outs, the RE is around 1.13 runs. At a 2-1 count, it is 1.18. So, you have a similar situation as you had with Victorino. If the defense does not change their position with the count, you must switch to hitting away (at least a greater percentage of the time), and if they move back, you still must hit away more often, on a random basis. Again, I doubt that Detroit changed their defensive alignment. I am pretty sure that Jim Leyland was absent from class on the day that they went over game theory. And of course, before the count went to 2-1, it started out at 1-0 and then 1-1. The 1-0 count, as with Victorino, was another good time to switch to hitting away (you could then switch back to bunting at 1-1 and then not bunting again at 2-1, although with this kind of strategy you risk being too predictable).

The worst part about this bunt was that Drew is a lefty. I don’t know why lots of managers insist on bunting runners over from second base with a lefty batter. Surely they realize that he is going to move the runner over on an out when hitting away a significant percentage of the time. With a lefty batter and a runner on second, even if he is a good bunter and fast, you probably want to bunt much less often if at all. And the defense should play accordingly (not nearly as far in as with a comparable – in hitting and bunting ability, and speed – righty batter), in which case the offense would be agnostic as to their strategy.

To give you an idea of the difference between having a lefty and a righty batter at the plate with a runner on second and no outs, here are the respective RE’s (there is no guarantee that they are of equal hitting talent, of course):

RHB: 1.104

LHB: 1.157

That is a pretty big difference. So the RE from bunting for a left-handed hitter like Drew (or Victorino, if he batted lefty) has to be a lot higher in order to justify a bunt attempt, as compared to a right-handed batter. Combine that with the 1-0 or 2-1 count and the bunt becomes questionable. Then again, it depends on where the defense is playing, as always. If they are playing optimally, given the handedness of the batter (along with everything else), then it doesn’t matter what the batter does. And so that the defense cannot take advantage of the offense, the batter must bunt and hit away in some exact proportion that makes the defense agnostic to its positioning (wherever they play, the RE from the offense’s bunt/hit-away mix is the same).
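A quick tally of how high the bar sits for Drew, using the REs quoted in this post (treating the count bump and the lefty bump as simply additive, which is a rough approximation):

```python
# REs quoted above, runner on 2nd and 0 outs.
re_at_2_1   = 1.18           # hitting away once the count reaches 2-1
lefty_bonus = 1.157 - 1.104  # LHB vs RHB gap when hitting away

# The bunt has to beat hitting away for THIS batter at THIS count, so
# roughly (assuming the two bumps stack, which is a simplification):
bunt_must_beat = re_at_2_1 + lefty_bonus
print(round(bunt_must_beat, 2))  # about 1.23 runs
```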

By the way, does that pitch from Veras go down in post-season history as one of the most predictable and worst location pitches on an 0-2 count ever? You probably have to throw the fastball more than you normally would at that count because you cannot afford to bounce a curve ball (especially with the gimpy Avila behind the plate) and you surely want to throw the curve ball in the dirt in that situation, if you choose to throw the curve ball.

People are notoriously bad at recollecting things they heard and saw, especially when they happened a long time ago. For example, the Innocence Project reports that eyewitness misidentification occurred in approximately 75% of convictions that were later overturned.

I submit that people are also poor at understanding the things that they do, even if they are experts at it and did it successfully for a long time. Professional golfers were once asked whether the orientation of the club face on a golf club or the direction of the swing was the primary determinant of the initial direction of the ball. In other words, if you swing to the right, but your clubface is pointed to the left at impact, which way does the ball start out? I forgot the numbers, but a significant percentage of PGA golfers answered incorrectly. If you care, it is mostly the angle of the clubface which determines the initial direction of the ball.

Tonight in the Braves game, in an AB by McCann with Kershaw pitching, Ron Darling, an excellent pitcher during his career and a Yale graduate to boot, remarked when the count went to 2-2 and Kershaw had thrown several fastballs, “He is surely going to throw the curve ball (or slider) now.” On its face, that is an absurd statement. If Darling knows that to any degree, then surely so does the batter, who happens to be a catcher! So that can’t possibly be correct! Of course, Kershaw threw another fastball. Darling immediately said, “Well, he decided to go with one more fastball and then the off-speed pitch.” Are you kidding me? Same shizit, different day. If Darling is that certain now, then surely so is McCann.

All pitchers, and especially great ones like Kershaw, randomize their pitch selections precisely so the batter cannot figure out what is coming with any certainty. Now, if in a certain count and situation a certain pitcher throws 80% fastballs, then obviously the batter can “look” fastball and be right 80% of the time. But still, he knows no more than that he has an 80% chance of getting a fastball. He does not know, and should not be able to deduce, anything beyond that 80/20 split based on the previous pitch or pitches. That is what it means to randomize your pitches: you cannot tell what is coming based on prior pitches.

The concept of “set-up pitches” is largely a fallacy, other than the fact that they may change the percentages. For example, if (and that is a big IF) throwing a high inside fastball actually makes a breaking pitch more effective on the next pitch, even if the batter knows that it is more likely to be coming, then you might throw 30% breaking pitches, whereas if the previous pitch were a low and away fastball or another breaking pitch, then maybe you would throw only 20% breaking balls.

Let me put it another way. A pitcher throws a high inside fastball. Now the count is 2-2, and the pitcher normally throws 50% off-speed and 50% fastballs at a 2-2 count with this batter in this exact situation. Are we to believe that he can throw 60% or 70% curve balls now, and yet if the last pitch were something else he would throw 30% or 40% curve balls? If that were the case, then the batter would know which way the pitcher was leaning based on the last pitch. That can’t be correct unless somehow the curve ball is more effective after a high inside fastball than it is after another pitch, even at the same frequency. That might be the case (I am not saying that it isn’t), but the batter can surely neutralize it by simply forgetting about the last pitch. Plus, if the pitcher now throws the curve ball more often, the batter has the luxury of knowing that, and he can look for the curve ball, presumably making it less effective.
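A small simulation of the leak described above. The pitchers, the frequencies, and the batter’s guessing rule are all invented for illustration; the point is only that a selection rule which depends on the previous pitch is exploitable, while a memoryless one is not:

```python
import random

random.seed(0)

def memoryless(prev):
    # Ignores the previous pitch entirely: a true 50/50 coin flip.
    return "curve" if random.random() < 0.5 else "fastball"

def pattern(prev):
    # Leans curveball after a high-inside "set-up" fastball (70/30),
    # and leans fastball otherwise. Frequencies are made up.
    p_curve = 0.7 if prev == "hi_fastball" else 0.3
    return "curve" if random.random() < p_curve else "fastball"

def batter_guess(prev):
    # The batter's exploit: look curve after the set-up fastball.
    return "curve" if prev == "hi_fastball" else "fastball"

def guess_rate(pitcher, trials=100_000):
    hits = 0
    for _ in range(trials):
        prev = random.choice(["hi_fastball", "other"])
        hits += batter_guess(prev) == pitcher(prev)
    return hits / trials

print(guess_rate(memoryless))  # about 0.50: nothing to exploit
print(guess_rate(pattern))     # about 0.70: the lean gives it away
```

Against the memoryless pitcher, conditioning on the previous pitch buys the batter nothing; against the “set-up” pitcher, his guess rate jumps from 50% to roughly 70%.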

I hope that was clear, because it is a very important point.

Anyway, the point is that Darling, as a successful pitcher, clearly randomized his pitches in all situations, as do all pitchers. All he can tell you, as an ex-pitcher, are the percentages at any given point. He cannot tell you what a pitcher is going to throw with any certainty unless those percentages reflect that certainty.

And more importantly, those percentages should almost never be predicated on what was previously thrown – only on the count, batter, game situation, etc. If those percentages are predicated on previous pitches, then the batter can more easily figure out what you are going to throw AND those percentages will become sub-optimal (again, with the caveat that a certain pitch might make it harder or easier to hit a certain subsequent pitch even if the batter knows that, which he surely does). In other words, if your selection is predicated on previous pitches – for example, if after throwing 5 fastballs in a row you are more likely to throw an off-speed pitch, even at the same count – then you are not randomizing your pitches. That IS the definition of randomization. Darling should know that, but somehow when words come out of his mouth, he doesn’t.

That is why when you think you are getting good analysis from ex-players, because they are ex-players, you often are not.