For shits and giggles, I decided to plug this into the basic Runs Created formula: Player A creates 110, Player B creates 103. So I guess that proves another one of my points, that it's more accurate to multiply OBP and SLG than to add them.
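For anyone who wants to run this kind of check themselves, here's a minimal sketch of the basic Runs Created formula. The stat lines below are made up for illustration; they are not the thread's actual Player A/B numbers.

```python
# Bill James's basic Runs Created: RC = (H + BB) * TB / (AB + BB).
# Note the identity: with on-base as (H+BB)/(AB+BB) and SLG as TB/AB,
# RC = OBP * SLG * AB -- i.e. the basic formula really is a
# multiplicative combination of OBP and SLG, not an additive one.
def basic_runs_created(h, bb, tb, ab):
    return (h + bb) * tb / (ab + bb)

# Hypothetical stat lines (illustration only):
player_a = basic_runs_created(h=160, bb=90, tb=280, ab=550)  # high-OBP type
player_b = basic_runs_created(h=170, bb=40, tb=310, ab=580)  # high-SLG type
```

That identity is the whole reason RC comparisons behave like OBP*SLG comparisons.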

Which reminds me of another one of your lies when you completely fabricated that no serious researcher would do such a thing. Well no serious mathematician would put OBP and SLG in the same linear regression. And no serious analyst would claim a point of OBP is 1.8 more valuable than a point of SLG.>>

That doesn't prove that "it's more accurate to multiply than to add OBP and Slug," because you didn't compare that same result to 1.8*OBP + Slug. You would find that the guy with the higher OBP also came out more valuable using that method.
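To make that point concrete, here's a small sketch with hypothetical players (not the ones from the thread) where the high-OBP guy comes out ahead under both combinations, so an RC-style result alone can't tell the two methods apart:

```python
# Two ways to combine OBP and SLG into one number:
def obp_times_slg(obp, slg):
    return obp * slg          # multiplicative combination

def weighted_ops(obp, slg, w=1.8):
    return w * obp + slg      # additive, with OBP weighted 1.8x

# Hypothetical players: A is the high-OBP type, B the high-SLG type.
a = {"obp": 0.400, "slg": 0.450}
b = {"obp": 0.340, "slg": 0.520}

# A comes out ahead under BOTH combinations here:
print(obp_times_slg(**a) > obp_times_slg(**b))   # multiplying
print(weighted_ops(**a) > weighted_ops(**b))     # weighted adding
```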

I will take your word for it that you got that RC result. So now it is time for you to take my word for something. I pulled the league numbers for the last 20 years from BR, put them into Excel, and calculated the 20-year average of Runs, OBP, Slug, OPS, 1.8*OBP + Slug, and OBP*Slug. I then calculated each season's deviation (STDEV) from the 20-year average for every category and looked to see which category's deviation most closely resembled the Runs deviation in each season. Bottom line:

the winner, with the most years of resemblance, was 1.8*OBP + Slug. It best resembled Runs in 7 seasons, followed by raw OBP at 5 seasons, OPS at 4, raw Slug at 3, and OBP*Slug at 1. So much for "multiplying is superior to adding."

However, to be completely fair, when the STDEVs over the entire 20 seasons were averaged for all categories, 1.8*OBP + Slug and OBP*Slug finished in a virtual tie at .115 average deviation from Runs. Raw OPS came in at .14, raw OBP at .26, and Slug at .29. While OBP and Slug were better correlators in some seasons, they went completely off the rails in others; 2017 is the perfect example, where OBP had a .74 correlation and Slug .55.
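For anyone who wants to replicate this sort of comparison, here's a rough sketch of the method as described above. Since the exact normalization isn't spelled out, this version assumes each category's seasonal values are converted to z-scores (deviation from the multi-year mean in units of that category's own standard deviation) before being compared to the Runs z-scores:

```python
def zscores(values):
    """Each season's deviation from the series mean, in SD units."""
    n = len(values)
    mean = sum(values) / n
    sd = (sum((v - mean) ** 2 for v in values) / n) ** 0.5
    return [(v - mean) / sd for v in values]

def seasons_won(runs, categories):
    """Count, per category, the seasons where its z-score landed
    closest to the Runs z-score."""
    runs_z = zscores(runs)
    cat_z = {name: zscores(vals) for name, vals in categories.items()}
    wins = {name: 0 for name in categories}
    for i, rz in enumerate(runs_z):
        best = min(cat_z, key=lambda name: abs(cat_z[name][i] - rz))
        wins[best] += 1
    return wins

# Toy example (made-up league runs-per-game, not real BR data):
runs = [4.1, 4.5, 4.3, 4.8]
cats = {"tracks_runs": [4.1, 4.5, 4.3, 4.8],
        "anti": [4.8, 4.3, 4.5, 4.1]}
print(seasons_won(runs, cats))
```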

**I have had enough of this discussion and so has the Board.** I have shown that the people who came up with the 1.8 theory are NOT full of shit, as you keep asserting, both in looking at Run Expectancy proof and now actual seasons' results.

I am no longer interested in running numbers ad infinitum in an effort to convince someone who was never going to be objective about this subject anyway, because his only interest is trying to "embarrass" or "expose" me by whatever contortions possible. There is a reason that at least a dozen research papers have confirmed that the 1.7 or 1.8 theory is valid.

**Because it is.** But I will end by conceding one important thing to you. While the 1.8 theory is valid, it didn't add much to the party: it is not significantly more accurate than raw OPS on a season-by-season basis. While 1.8 did kick OBP*Slug's ass by beating it 13 seasons to 7 in head-to-head comparisons, on the whole there was hardly any difference over the entire 20 seasons.

This is why, like so many other things including James' old RC, the 1.8 theory yielded to a slightly better method, which today is wOBA.
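For reference, wOBA is a linear-weights stat. The weights below are only illustrative; the real coefficients are recomputed each season from run-expectancy data (FanGraphs publishes the yearly values), so treat these as ballpark figures.

```python
# Illustrative wOBA weights, roughly in line with recent seasons
# (actual values change year to year):
W_UBB, W_HBP, W_1B, W_2B, W_3B, W_HR = 0.69, 0.72, 0.89, 1.27, 1.62, 2.10

def woba(ubb, hbp, singles, doubles, triples, hr, ab, bb, ibb, sf):
    num = (W_UBB * ubb + W_HBP * hbp + W_1B * singles
           + W_2B * doubles + W_3B * triples + W_HR * hr)
    den = ab + bb - ibb + sf + hbp   # wOBA's plate-appearance denominator
    return num / den
```

Unlike OPS or the 1.8 weighting, every event gets its own empirically derived run value, which is exactly the direction the 1.8 debate was heading.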

I see no reason to continue this unless you have a substantive point to make. If it's just more insults or accusations, then go pound sand.
