People should be celebrating the fact that he wants to make the algorithm public so there is transparency.
That is a lot more than we will ever get from Google/YouTube, Facebook, etc.
The problem is the word "algorithm" is thrown around like it's a super secret formula when it comes to tech.
In reality, there are few true algorithms involved; the casual use of the word "algorithm" simply refers to "how the system works".
I mention this to point out that revealing the algorithm (or code) itself is unlikely to explain or divulge anything of importance, because an algorithm or source code is only as good as the data it collects, uses, and/or learns from.
For example, imagine if the police were told "a man with green hair robbed a bank". You might think, "Very few people have green hair; we just need to look for anyone with green hair in the area."
However, now suppose that at the same time as the bank robbery, there was a punk rock concert in town and hundreds of attendees had green hair. Suddenly, that "system" for finding the bad guy is no longer viable.
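To make that concrete in code, here is a minimal, entirely hypothetical sketch (the names and counts are all made up) showing how the exact same matching rule goes from useful to useless depending on the data it runs against:

```python
# Hypothetical sketch: the same "look for green hair" rule run on two datasets.
# All names and numbers are invented for illustration.

def find_suspects(people):
    """Flag everyone who matches the only clue we have."""
    return [p for p in people if p["hair"] == "green"]

# An ordinary day: green hair is rare, so the rule narrows things down.
ordinary_day = [
    {"name": "A", "hair": "brown"},
    {"name": "B", "hair": "green"},  # likely our robber
    {"name": "C", "hair": "black"},
]

# Punk concert day: the identical rule buries us in matches.
concert_day = [{"name": f"fan{i}", "hair": "green"} for i in range(300)]

print(len(find_suspects(ordinary_day)))  # 1   -> actionable lead
print(len(find_suspects(concert_day)))   # 300 -> effectively useless
```

Nothing about the rule changed between the two runs; only the data did.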
The problem with the systems that sites use to auto-moderate, or at least auto-flag content for review, is that they are only as good or as bad as the data they have access to and the number of data points they can check.
The other, and more significant, flaw in moderation algorithms is that they can be heavily influenced by configuration data entered intentionally or by mistake.
For example, let's say I set up CowboysZone to trigger moderation checks for the word "homer". My intention may only be to have the staff look at any post containing that word, because such posts are sometimes part of personal insults or attacks. However, suddenly a discussion about "Homer" from The Simpsons starts triggering moderation requirements, as does any mention of anyone named Homer, or even lighthearted self-labeling such as "we have our homer favoritism".
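A naive version of that kind of trigger might look like the sketch below. The watch list and sample posts are invented for illustration, and real systems are far more elaborate, but the failure mode is the same: the check sees only the configured word, never the context.

```python
# Hypothetical sketch of a naive keyword trigger like the "homer" example above.
# The watch list and sample posts are invented for illustration.

FLAGGED_WORDS = {"homer"}

def needs_review(post: str) -> bool:
    """Flag any post containing a watched word, regardless of context."""
    return any(
        word.strip('.,!?"') in FLAGGED_WORDS
        for word in post.lower().split()
    )

posts = [
    "You are such a homer, you refuse to see the flaws.",  # the intended target
    "Homer Simpson is the best cartoon dad ever.",         # false positive
    "We all have our homer favoritism.",                   # false positive
]

for post in posts:
    print(needs_review(post), "->", post)  # every single one gets flagged
```

All three posts come back flagged identically; the configuration data, not the code, decides what gets caught.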
Now, your first thought may be, "Any moderator who looks at a post about Homer Simpson would know it's okay", but you would be wrong. A site like Twitter has many moderators at many different levels, including many who have likely been told that approving something which is later removed will hurt their performance review. What do you think those moderators are going to do? They are going to think, "The system would not have flagged it if it were okay, and it is better to be safe than sorry, so I will confirm the post deletion and the user warning."
I posted all of this to say that sharing algorithms, even open-sourcing them, is pointless. What they need to share is what content they are censoring and why they are doing it.