Tuesday, December 01, 2020

Comment on the Substack post "Anomalies in Vote Counts and Their Effects on Election 2020"

[Note: I originally wrote this as a Comment for the Substack post, but the comments section for that article is a train wreck. So I am putting it here, as a new post on this dormant blog, which I started to record thoughts on a technical issue in Holocene climate reconstruction. It is exceptionally difficult to fish out information from the good Substack comments, so this is disappointingly incomplete. But I am out of time. Maybe I'll edit/improve it later. --AMac78]

On 11/24/20, the pseudonymous "Vote Integrity" posted "Anomalies in Vote Counts and Their Effects on Election 2020: A Quantitative Analysis of Decisive Vote Updates in Michigan, Wisconsin, and Georgia on and after Election Night".

The author designed and employed a method to highlight unusual and possibly fraudulent additions to the vote counts in battleground states. To do this, he or she took advantage of the time-series data that Edison Research created for each of the 50 states for "National Election Pool" subscribers, and that the New York Times posted online.

Two excellent tools:

* Substack commenter Steve created a script that pulls the Edison data and generates time-series graphs (like the author's Fig. 1/Michigan and Fig. 2/Wisconsin) for the state of your choice, at this page.

* Substack commenter Mario Delgado coded a self-service application that uses the same source to generate anomaly profiles (like the author's Fig. 3/Michigan and Fig. 5/Wisconsin) for the state of your choice, here.

The author uses statistical procedures that identify "anomalous" batches of votes that were added to the State Election Boards' running counts, beginning when the polls closed on the evening of Tuesday, Nov. 3, and continuing for the next few days. The core assumption is that the voters of each state are evenly distributed. In other words, each subset (precinct - city - county) has more-or-less the same percentage of Biden and Trump voters. In addition, Biden and Trump voters would be more-or-less equally inclined to vote in person or by mail... and so forth.
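The heart of the method is converting the Edison/NYT time series of cumulative totals into per-batch vote additions. Here is a minimal sketch of that conversion in Python. The field names ("votes", "vote_shares", "bidenj", "trumpd") match the JSON format that circulated for this dataset in late 2020, but treat them as assumptions; the sample series is hypothetical.

```python
# Derive per-batch (Biden, Trump) vote additions from a time series of
# cumulative totals and candidate vote shares, as in the Substack analysis.
def batch_deltas(timeseries):
    """Return a list of (biden_added, trump_added) for each successive update."""
    batches = []
    prev = None
    for entry in timeseries:
        total = entry["votes"]
        biden = entry["vote_shares"]["bidenj"] * total
        trump = entry["vote_shares"]["trumpd"] * total
        if prev is not None:
            batches.append((round(biden - prev[0]), round(trump - prev[1])))
        prev = (biden, trump)
    return batches

# Hypothetical two-update example:
series = [
    {"votes": 1_000_000, "vote_shares": {"bidenj": 0.48, "trumpd": 0.50}},
    {"votes": 1_150_000, "vote_shares": {"bidenj": 0.53, "trumpd": 0.45}},
]
print(batch_deltas(series))  # -> [(129500, 17500)]
```

Note that because the shares are reported to only a few decimal places, the reconstructed batch counts carry rounding noise of up to a few thousand votes on large state totals.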

The author isn't stupid; s/he knows that these assumptions aren't completely true. They are taken as a starting point for finding possible needles (phony votes) in a haystack of genuine returns.

Readers who are unfamiliar with the field should be aware that there have been multiple informed criticisms of this approach by commenters who are very knowledgeable in statistics and forensics. Thus, my advice is to be very wary of accepting the author's methods and conclusions without considering the merits of those critiques. I raise this as a warning -- that subject is beyond the scope of this comment. And, unfortunately, sorting through thousands of Substack comments to find these solid criticisms would be a major chore.

In the section "Quantifying the Extremity," the author presents a table with ten anomalies -- suspicious batches of votes that were added to the Presidential race's tallies, late on Election Day or in the early morning hours of the following day. S/he highlights (in yellow) these four entries from battleground states:

Anomaly 1. Michigan batch on 11/4/20 at 6:31AM EST -- 141,258 Biden / 5,968 Trump

Anomaly 2. Wisconsin batch on 11/4/20 at 3:42AM CST -- 143,379 Biden / 25,163 Trump

Anomaly 3. Georgia batch on 11/4/20 at 1:34AM EST -- 136,155 Biden / 29,115 Trump

Anomaly 4. Michigan batch on 11/4/20 at 3:50AM EST -- 54,497 Biden / 4,718 Trump

As of this writing (12/1/20 1430 GMT), the comments to "Anomalies in Vote Counts..." have provided strong evidence that:

* Anomaly 1 is explained by the City of Detroit's report of most of its Absentee ballots to the Michigan Secretary of State.

* Anomaly 2 is explained by Milwaukee's report of most of its Absentee ballots to the Wisconsin Elections Commission.

I don't know if other commenters have explained Anomalies 3 or 4.

I've written up this summary because the Substack commenting system as adopted by "Vote Integrity" fails to support an informed discussion of the points raised in the original article. As far as I can tell, submitted comments are impossible to search, and nearly impossible to link effectively. So the most civil and informative remarks get buried, and people new to the post never see them. Discussion doesn't build on what has been linked, discovered, and discussed. Instead, amnesia rules the thread. Hard-won insights on technical issues are overlooked, forgotten or ignored. As a result, everything keeps getting re-litigated from the beginning. Tempers fray.

Anomaly 1. Michigan batch on 11/4/20 at 6:31AM EST

The City of Detroit published the PDF "November 2020 Election Summary Report Signed Copy" on its November 3, 2020 General Election Official Results page. Page 2 breaks down the election results, with 100% of 637 precincts reporting.

74,733 Election Day and 166,203 Absentee votes for Biden

6,736 Election Day and 6,153 Absentee votes for Trump

1,126 Election Day and 1,081 Absentee votes for Other candidates

(also 373 Election Day and 109 Absentee unresolved write-ins)

Anomaly 1 is 141,258 Biden / 5,968 Trump / 2,546 Other thus 94.3% / 4.0% / 1.7%

Detroit City Absentee is 166,203 Biden / 6,153 Trump / 1,081 Other thus 95.8% / 3.5% / 0.6%
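The percentage comparison above is simple arithmetic; a few lines of Python reproduce it from the raw counts in the two batches:

```python
# Convert (Biden, Trump, Other) raw counts into percentage shares,
# rounded to one decimal place as in the comparison above.
def shares(biden, trump, other):
    total = biden + trump + other
    return tuple(round(100 * v / total, 1) for v in (biden, trump, other))

print(shares(141258, 5968, 2546))  # Anomaly 1            -> (94.3, 4.0, 1.7)
print(shares(166203, 6153, 1081))  # Detroit Absentee     -> (95.8, 3.5, 0.6)
```

The two profiles are close but not identical, which is consistent with Anomaly 1 being dominated by, but not exactly equal to, the Detroit absentee submission.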

Commenter "Mike" linked sources that showed that Detroit counted its own absentee ballots, and submitted them directly to the State -- they were not consolidated with absentee ballots from the other jurisdictions in Wayne County.

Anomaly 2. Wisconsin batch on 11/4/20 at 3:42AM CST

Page with downloadable Wisconsin vote data, by county.

Milwaukee City (pop. 590,000) is one of 19 municipalities in Milwaukee County (946,000). This County government page says the City recorded 169,519 absentee ballots. I don't see a total for the County. It looks like the municipalities report to the County, which then reports to the Wisconsin Elections Commission (is that right?). But without a breakdown of Biden and Trump counts for absentee ballots, it's going to be impossible to make a detailed comparison of the Milwaukee submission with Anomaly 2.

Milwaukee Journal Sentinel article of 11/4/20, "Biden overtook Trump in the early morning hours when Milwaukee reported its roughly 170,000 absentee votes, which were overwhelmingly Democratic."

Commenter "Eric 377" wrote on 11/29/20: "I live in Wisconsin... My understanding is that update [listed by 'Vote Integrity', i.e. Anomaly 2] is Milwaukee County data. Milwaukee County is the state's most populous and easily the highest number of votes cast, yet the elections staff is proportionally not smaller than other counties. Prior to the election, the Milwaukee media reported that the elections team was very well prepared, but the size of the team, their resources including numbers of voting machines and the vote totals tell us that their actual performance was the worst in the state, and by a lot, compared with other counties with similar 'per vote' resources... It seems to me that the most 'acceptable' rationale for Milwaukee being hours behind where they were expected to be would be exactly that the team was a lot less efficient than teams in the rest of the state. The second largest county in the state, Dane (also the second greatest source of Biden votes) reported nearly 100% of their votes almost 6 hours earlier than Milwaukee."

Conclusion

As mentioned at the outset, I'm posting this to serve as a point of reference for commenters at "Vote Integrity's" Substack article. New readers should be aware that Anomalies 1 and 2 are likely explained by ordinary vote-counting mechanisms. That means they aren't telltales of, say, a hacker injecting tens of thousands of phantom votes into the Michigan or Wisconsin vote-counting systems.

I may or may not edit the post further, depending on how much more time I can afford to sink into this hobby.

Sunday, August 14, 2011

Lightsum and Darksum are Calculated, not Measured

In last year's post The Tiljander Data Series: Data and Graphs, I explained that the four Tiljander data series were actually three: Darksum is calculated as (Thickness minus Lightsum).

I've since discovered that there are actually two Tiljander data series rather than four.

Thickness and XRD are measured values.

Lightsum and Darksum are values that Tiljander et al. calculated from Thickness and XRD.

Here are the formulas. Varve thicknesses are measured in microns (thousandths of a millimeter, um).

Lightsum = Thickness * XRD * 0.003937

Darksum = Thickness * ( 1 - ( XRD * 0.003937 ))

Adding these two equations eliminates the XRD term, yielding

Thickness = Lightsum + Darksum
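The cancellation is easy to verify numerically. Here is a short sketch using the published formulas and hypothetical varve values (the constant 0.003937 is approximately 1/254):

```python
# Sanity check: adding the Lightsum and Darksum expressions cancels the
# XRD term exactly, leaving Thickness. Varve values below are hypothetical.
C = 0.003937  # scale factor in Tiljander et al.'s formulas, ~ 1/254

def lightsum(thickness, xrd):
    return thickness * xrd * C

def darksum(thickness, xrd):
    return thickness * (1 - xrd * C)

for thickness, xrd in [(500.0, 120.0), (1200.0, 40.0), (80.0, 250.0)]:
    ls, ds = lightsum(thickness, xrd), darksum(thickness, xrd)
    assert abs((ls + ds) - thickness) < 1e-9

print("Lightsum + Darksum reproduces Thickness for all test values")
```

This identity holds exactly in the algebra, so any systematic discrepancy against the archived values (like the 0.5% to 0.8% offset in Darksum noted below) points to a slightly different constant having been used in the original calculation.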

The calculated values of Lightsum are within 0.01% of the values archived at NCDC. For Darksum, the calculated values are consistently 0.5% to 0.8% too low. Presumably, this is a rounding error.

[UPDATE Aug 15, 2011 -- Commenter HaroldW figured out the exact formulas by which Lightsum and Darksum are calculated. It strongly suggests that Tiljander et al. made a minor arithmetic error in their formulae, such that

Thickness = Lightsum + (( 255/254 ) * Darksum )

"Exact" means that the calculated values of LS and DS agree with the archived values to within 0.001%. I've updated the Excel file at BitBucket to reflect HaroldW's insight.]

"Discovered" as used above is tongue-in-cheek. Obviously, the authors of Tiljander03 have known from the outset that this was their procedure. However, this finding is new to me. Presumably, it is also news to the authors of Mann08, Mann09, Kaufman09, and to other people who take an interest in paleoclimate reconstructions.

"Does it matter?" From a statistical point of view, yes, it does.

Sunday, July 10, 2011

Pattern Recognition

Scientists pride themselves in the ability to tease informative patterns out of masses of data. And with good reason -- that skill (or aptitude) is one of the traits that leads to insight, and thus publications and professional success.
I don't believe that gazing at "spaghetti graph" reconstructions is the best way to evaluate whether or not the Tiljander data series were used correctly in Mann08 (for links to referred-to papers and posts, see here). That's a question that's better answered by reading her paper (Tiljander03), getting a feel for what her data look like (graphs here), and thinking about the physical meaning of the varve characteristics that go into "XRD," "lightsum," "darksum," and "thickness."

By weaving these threads together, we can figure out the solution to this puzzle:

Can the Tiljander data series be meaningfully calibrated to the instrumental temperature record, 1850-1995?

The answer is No.

There might be a way to indirectly achieve such a calibration, which was the approach that authors of Kaufman09 took with XRD after belatedly coming to grips with this problem. But there's no feasible direct approach, of the type used in Mann08 and Mann09.

This has proven to be a very contentious point. But there's no good reason it should be seen as such. Truly contentious questions have strong arguments on each side of the issue. The defenders of Mann08 don't even argue for "Yes," but rather for a stance akin to "I don't know, and it doesn't matter."

That's silly.

Knowing that the Tiljander data series were massively contaminated by non-climate signals in the 19th and 20th centuries, we can look for patterns in the reconstructions presented in Mann08 and Mann09.

Let's consider a few cartoons.

Thursday, June 23, 2011

Voldemort's Question

Updated June 25 & 26, 2011 -- see end of post

Are the Tiljander proxies calibratable to the instrumental temperature record, 1850-1995?

Reader Alex Harvey copied his submission to RealClimate.org as a comment to the just-prior post at this blog, "The Tiljander Data Series Appear Again, This Time in a Sea-Level Study." Some time later, it was allowed into RealClimate's "2000 Years of Sea Level" at position 22. The second of Harvey's two points concerned the use of Tiljander:
The study has also been criticised on various blogs for using “one of the multiproxy reconstructions that employed the four (actually three) uncalibratable [edit] Tiljander lakebed sediment data series” e.g. http://amac1.blogspot.com/2011/06/tiljander-data-series-appear-again-this.html.[edit].
RealClimate's moderators snipped the comment as shown.

Prof. Mann offered this inline commentary --
[Response: No. Just more of the usual deception from dishonest mud-slingers. More on that in short order. -Mike]

Tuesday, June 21, 2011

The Tiljander Data Series Appear Again, This Time in a Sea-Level Study

At RealClimate.org, Stefan Rahmstorf has written "2000 Years of Sea Level" about a study published on June 20, 2011 in PNAS. Andrew Kemp and co-authors BP Horton, JP Donnelly, ME Mann, M Vermeer, and S Rahmstorf reconstruct sea levels from 500 AD to the present, and relate these levels to the temperatures of the past, using a multi-proxy reconstruction that was first presented in Mann et al. (PNAS, 2008). (The Kemp11 PDF can be downloaded at the RC post.)

It turns out that the chosen temperature recon is heavily dependent on the four (actually three) uncalibratable Tiljander data series. This reliance grows stronger as one goes back in time, and shorter (younger) records "drop out."

I tried to leave a remark on this subject at RealClimate.org. Apparently, that site is set to automatically fail any comment tagged with my user name, email, or IP address. Here is the local copy of what I submitted (21 Jun 3:50 PM EDT) --
I was surprised at the provenance of the paleotemperature reconstruction that was used in Kemp et al's Fig. 2A and Fig. 4A. According to Fig. 2A's legend, it is "Composite EIV global land plus ocean global temperature reconstruction, smoothed with a 30-year LOESS low-pass filter". The reference is Mann et al. (2008). In that paper's S.I., the unsmoothed version is in panel F of Fig S6, as the black line labelled "Composite (with uncertainties)".

This is one of the multiproxy reconstructions that employed the four (actually three) uncalibratable Tiljander lakebed sediment data series.

According to Gavin Schmidt, "...it's worth pointing out that validation for the no-dendro/no-Tilj is quite sensitive to the required significance, for EIV NH Land+Ocean it goes back to 1500 for 95%, but 1300 for 94% and 1100 AD for 90%" (link). Further remarks on this issue as Responses to other RC comments here (see numbers 525, 529, and 531).

The incorrect inclusion of Tiljander could well make this EIV reconstruction progressively worse, as one goes from 1500 AD back to 500 AD. This might explain the increasing divergence between the temperature recon and the sea-level recon, as one travels back from 1100 AD to the beginning of the recons at 500 AD. This pattern is shown in Kemp11's S.I. Figs. S3, S4, and S5.

Did any of the peer reviewers comment on this issue, or request that you use a no-Tiljander temperature reconstruction?

Sunday, August 22, 2010

A comment on M+W10 submitted to RealClimate.org

I submitted a comment to the RealClimate.org post Doing it yourselves (20 August 2010), as the author makes some interesting remarks on the intersection of McShane and Wyner (2010) and the Tiljander proxies. My comment entered the moderation queue last night after position #41, and wasn't among the ten comments that have been released in three batches this morning. Perhaps it has been failed, or perhaps it's being delayed. If it does make a belated appearance (accompanied by inline commentary?), I'll note that in an update.

[ UPDATE 22 Aug. 2010 3:20 PM EDT -- In the past hour, my comment passed moderation, and was slotted into position #42 (the comment count is currently at 60). Gavin Schmidt's inline commentary is reproduced at the tail of this post. -- AMac ]

Monday, August 16, 2010

The Tiljander Data Series: Data and Graphs

I have compiled the information from the Lake Korttajarvi borehole varved sediments record that was characterized in Tiljander03, and then used in the multiproxy paleoclimate reconstruction Mann08.

The Excel files containing these data can be downloaded from this BitBucket.org archive: the 1.5 MB file Tiljander-Mann08-proxies-data+graphs.xls and the 1.8 MB file Tiljander_proxies_dataset_graphs.xls.

Some observations and some graphs follow.

Part 2: Synopsis of some Tiljander-related arguments

This post is the continuation of a discussion on the Tiljander proxies that took place in the comments thread following the Aug. 1, 2010 Climate Audit post The No-Dendro Illusion.

Part 1 is here. As with that post, I may clean up formatting and grammar here, without notice.