Policy and Standards for Critical Discourse

A work in progress. I'll have to revise it somewhat. I knew (or I thought I knew) that the problem wasn't and isn't entirely a cultural one, but rather an issue of censorship and propaganda. The work should put a greater emphasis on how the latter are made to seem legitimate, e.g. vague, wishy-washy rules, a lack of accountability, social engineering, the exclusion of users, etc. Such deceit primarily exploits the good faith of the user, and while I've always been a critic of w:WP:AGF, I've probably been understating just how subversive it is. The essay is more or less on the right track, but it needs to be more compact and specific. I've added this qualification in case I don't get around to finishing it. One tends to beat around the bush somewhat when criticizing the rules themselves.

The Socratic method begins not with a finished work, nor a question, but with an assertion. While many people and organizations pay gratuitous lip service to critical thought, one may find that critical discourse is looked upon with far less enthusiasm. Yet perhaps the greatest asset of Western culture is that liberty and knowledge are valued over saving face, and that being wrong is not a permanent black mark upon one's reputation and credibility, but something transient and inherent to growth. In the hopes of encouraging healthier social expectations and habits, my current proposed guidelines are stated below.

Such explicit clarification is both necessary and within the rights of users to expect. Even minor features in the wording of policy and the design of a website can undermine normal social transmission. The UCoC states "Criticism should be delivered in a sensitive and constructive manner." Qualifiers like "sensitive" and "constructive" seem to serve no purpose, as all contributions are already required to meet common standards of decency and quality. To single out critical contributions as subject to additional yet nonspecific qualification seems unnecessary and leaves such qualifiers open to abuse. A user who contributes in good faith would welcome feedback and would rather be treated with honesty than be patronized or receive no feedback at all. Whether these qualifiers are satisfied seems to be judged largely by how others respond after the fact. I suspect many users would withhold their well-meaning critique unless reassured that it is welcome and will not be taken as an insult. In the quoted sentence of the UCoC, there is a tacit suggestion that criticism should be taken personally, but that in doing so one will not lose face if the policy is obeyed. Work and author are conflated rather than distinguished from one another. This expectation undermines discourse and intellectual growth itself. Users are conditioned to expect ego-sparing feedback and are to some extent relieved of accountability to objectivity, but psychologically surrender their entitlement to honesty from others and their right to speak honestly themselves. The rest of part two of the UCoC seems wishy-washy, banal and vaguely irritating, or at best redundant with the remainder of the UCoC. Statements like "Be ready to challenge and adapt your own understanding, expectations and behavior as a Wikimedian" and "Practice empathy" are nonspecific and obviously outside any given project's authority to enforce uniformly.
AGF, or "Assume good faith", is only enforceable to the extent that we say what we assume, so the rule could equivalently be stated as "do not question the motives of others." Without euphemistic phrasing that uses adjectives like "good" and "faith", the rule sounds exactly as Orwellian as it is. To demand honesty, decency and the acceptance of criticism is entirely reasonable; to demand credulity is not. The UCoC emphasizes the importance of civility ad nauseam, yet nowhere does it ask for honesty. It does tell users to "strive for accuracy", to its credit. Accuracy is not always compatible with neutrality, however.
 * In contributing, one acknowledges that their contribution is subject to scrutiny by others and shall welcome critical comments and reviews.
 * No work or idea shall be protected from criticism.
 * Critique is best written in plain, natural language and it is undermined when it must be qualified with flattery or apology, or shoehorned into an unnatural, clinical, deadpan style.
 * Aim for accuracy, honesty and objectivity, not necessarily neutrality.
 * Acceptable contributions shall remain visible in their original context.
 * There shall be a searchable public record of all issued blocks/bans, recording 1) the offending contribution and 2) the specific (official) rule or policy broken. Interested users should be able to traverse and download this record in its entirety for the purpose of research, journalism, or personal interest.
 * For websites like Wikipedia where content itself is subject to user agreement, decisions should be made by plurality/majority vote rather than by consensus. The visibility of discussions and comments themselves should not be subject to voting in any case.
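The record proposed in the second-to-last point could be sketched as a simple machine-readable schema. This is only an illustration, not a concrete proposal for any particular software: every field name below is hypothetical, and the diff URL is a made-up example. The point is merely that each block must cite the offending contribution and the specific official policy violated, and that the full log be trivially searchable and downloadable in bulk.

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class BlockRecord:
    """One entry in the proposed public block/ban log. Field names are hypothetical."""
    blocked_user: str
    blocking_admin: str
    offending_diff_url: str   # permanent link to the contribution at issue
    policy_violated: str      # the specific official rule, not an essay
    timestamp: str            # ISO 8601
    duration: str             # e.g. "72h", "indefinite"

# An invented example record for illustration only.
record = BlockRecord(
    blocked_user="ExampleUser",
    blocking_admin="ExampleAdmin",
    offending_diff_url="https://example.org/w/index.php?diff=123456",
    policy_violated="WP:NPA",
    timestamp="2023-10-10T15:28:00Z",
    duration="72h",
)

# Serializing each record to JSON makes the log searchable by ordinary tools
# and downloadable in its entirety, as the guideline above requires.
print(json.dumps(asdict(record), indent=2))
```

Any structured, documented format would do equally well; the essential requirements are only that the two citation fields be mandatory and that the whole record be public.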

This biasing of social transmission is the status quo not just on Wikimedia's projects, but on many other websites and in many physical settings as well. Reddit uses upvotes and downvotes, but displays only one counter rather than a separate number for "dislikes". Presumably this is simply the sum, with downvotes decrementing and upvotes incrementing. One cannot tell whether one is alone in agreeing or disagreeing with any given reply or post. Replies that are downvoted are generally given less visibility or hidden altogether, and whoever made them may fall below a minimum "karma" threshold and be excluded outright. Content visibility is dependent on this number rather than degree of interest per se, e.g. as represented by the frequency of replies in a traditional forum. This favors the social transmission of assent over that of dissent, and discourages active debate and consensus building itself by allowing one to transmit approval or disapproval without actually articulating any feedback. The official help page states "Upvotes show that redditors think content is positively contributing to a community or the site as a whole. Downvotes mean redditors think that content should never see the light of day." Not just a little Orwellian. Notice, though, the shifting of responsibility here, whereby the expression of disapproval is taken to be an endorsement of censorship. Places like Reddit, Quora and StackExchange more closely resemble a news or media website with a comment section added as an afterthought than a discussion board. On Reddit, a back-and-forth conversation longer than a couple of replies is automatically truncated, with most of its contents hidden behind a "view more of the discussion" button. There is little continuity in the structure of a given discussion topic or thread.
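The information loss of a single net-score counter can be made concrete with a small sketch (the vote totals are invented for illustration): a post with 10 upvotes and no downvotes displays identically to one with 110 upvotes and 100 downvotes, even though the latter drew far more engagement and substantial disagreement.

```python
def net_score(upvotes: int, downvotes: int) -> int:
    # What a single-counter interface displays: the sum, not the split.
    return upvotes - downvotes

uncontroversial = (10, 0)   # 10 up, 0 down
divisive = (110, 100)       # 110 up, 100 down

# Both posts render as "10" to the reader; the split is invisible,
# so a dissenter cannot tell whether anyone else shares their view.
assert net_score(*uncontroversial) == net_score(*divisive) == 10
```

Displaying the two tallies separately, as some forums do, would preserve exactly the information that the summed counter destroys.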
In the case of Quora and StackExchange, the pretense of conversation is more or less abandoned altogether in favor of a format where you ask a question and someone tells you "the answer". Yet despite all this, much commentary about polarization and social media asserts that users of such websites are at fault because they ignore information they don't like and tend to cluster within "safe spaces" or "echo chambers". If one's contribution tends to become invisible everywhere but such an "echo chamber", then obviously an echo chamber is the only venue in which one can air it. Some time ago, YouTube changed so that viewers see only the number of upvotes, not downvotes. Apparently the uploader can see both, so the change has the effect of preventing the transmission of disapproval between viewers. Both YouTube and Reddit (and probably others) implement "shadowbanning". When a user is shadowbanned on a given channel or subreddit, their comments (including all future comments) become invisible to everyone but themselves and the moderators, without any notice being sent to the user. This feature is openly contemptuous of common decency, discourse, and freedom of expression themselves. A user may be allowed to go on for quite some time without realizing they're talking to nobody. More than anything else, this is a reflection on the character of the website's owners. It speaks to a great lack of integrity and decency. Think twice before using any site that supports this feature. YouTube's comment system is particularly bad. In addition to everything else I've already mentioned, it apparently implements a very broad automatic censor. Many seemingly appropriate comments are either suppressed entirely or, oddly, made visible only when the comment section is sorted by "newest first" rather than the default "top comments" order. Since the comment section may be quite large, this makes responding to such comments extremely inconvenient.
One must then scroll down through every other comment posted in the meantime in order to reply to a comment that the censor has put in this bizarre state of limbo. The position of one's own comments as they appear to others is not reflected by how they appear when one is logged in, in which case they appear right at the top even though they may be buried from the viewpoint of other users. Buried comments are for all intents and purposes censored, as it's very tedious to traverse so many other comments to reach them. It may be the worst comment system on the internet. Finally, I'm not very familiar with Twitter, but forcing a limit of 280 characters would make anyone sound glib. In all cases though, "voting" factors largely. Clearly, policy and website design are at least partly at fault, in addition to overt censorship. When one objects to the latter on any of the popular "social" websites, there always seems to be someone around to say "it's their right to censor people". This is a dishonest argument. That something is presently legal does not mean that it is moral or in the public interest. These websites serve millions of people and clearly should not be allowed to manipulate public discourse. Freedom of speech is not merely a law but a principle.

Many venues encourage a culture of obsequiousness and consumption while discouraging argument and discourse. The latter appear to have fallen by the wayside, and in their place we have mass media, including a handful of websites and services whereby "content" is delivered at a high frequency to a gushing, fawning audience. Information that is misleading, subversive, promotional or otherwise non-dialectical is essentially crafted to fool the consumer of that information. When eristic is juxtaposed with dialectic, people will nearly always recognize and prefer the latter. Both are better off well-developed, but eristic relies on presentation and pomp to a far greater degree than does dialectic. This is why eristic must be developed in private and delivered only when it is complete and with minimal chance of scrutiny. Mistakes, disagreements, debates, edits, refinements, corrections and scrutiny are all part of any social dialog or intellectual endeavor. Websites like YouTube and Reddit are in fact quite poorly suited for discourse by design. Even a traditional, run-of-the-mill forum supports more coherent back-and-forth conversations between users. While not a forum per se, Wikimedia projects like Wikipedia also support (in a technical sense) coherent discussion, e.g. on a talk page. Beyond the prophylactic social engineering that aims to make dissent seem unfashionable and the design decisions I've covered above, more nuanced strategies are needed on a site like Wikipedia, where users actually can communicate and are likely to be more competent in general. On Wikipedia there is a significant amount of additional policy besides the UCoC. Most of the official policy would be fine, if only it were actually followed and enforced impartially. Users with authority are often the worst offenders, generally acting with impunity. In addition to the "official policy", there are a large number of essays.
While they all bear a disclaimer at the top assuring the reader that they're not actually policy (only that "some represent widespread norms"), many of them are de facto policy. In other words, the less PR-friendly, less consistent, self-contradictory rules imposed upon editors go here. Common accusations against users expressing minority opinions are Wikipedia:Disruptive editing, Wikipedia:Tendentious editing, w:Wikipedia:Don't bludgeon the process and Wikipedia:Here to build an encyclopedia. After user B makes an argument that user A does not want to hear, A can try several things. Flopping is a frequent occurrence: user A provokes user B for a while, B gets fed up and angry with A, and A flops like a soccer player. They pout and complain bitterly about B's incivility, how disruptive B has been, how B is entirely incapable of collaboration, how B is just not here to build an encyclopedia. Experienced users know not to take the bait, but new users are frequently had this way. Another frequent tactic is the intentional misreading of B's argument by A. This can go a couple of ways, usually over the course of one or two dozen comments. If B gets angry, then A will flop as before. If B lets it ride, then A goes on ignoring B's point. If B corrects A, then B is accused of ignoring A's point. If at that time B restates their argument in a different way so that A might understand it, A accuses B of bludgeoning. If B entertains this accusation of misconduct, the conversation is now entirely removed from the substance of B's argument. If B points out that A still hasn't addressed the point, then A dramatically declares that B is just there to argue, a 'troll' who set out just to waste A's time, and that no amount of logic or reason could persuade B. WP:NOTHERE is invoked, B is sanctioned (blocked, often indefinitely), and then A goes about replying to all of B's other comments, often bastardizing as much of B's unrelated work as they can get away with.
Most often though, B will lose patience long before it gets to that point. This is a typical, if anecdotal (and abbreviated), example of how such an argument might play out in full. It sounds like a comedy routine, but when you see it happen it leaves a far more disturbing impression, which grows stronger whenever one considers or is reminded of Wikipedia's influence. Note that the first bullet point above would preclude such bogus accusations of bludgeoning. If A replies to B's point, then B may reply to A's point and vice versa. That's what a discussion is. Some of these policy-essays aren't even compatible with the official policy. Consider this part of w:WP:NPA, which is official policy: "[...] but some types of comments are never acceptable: [...] Comparing editors to Nazis, terrorists, dictators, or other infamous persons." Yet w:WP:NONAZIS is frequently used to do just that. I'm sure some on the receiving end deserve their block, but in any case "nazi" is a defamatory label of judgement. It is a clear-cut violation of unambiguous official policy, yet on the essay's talk page it's endorsed by a list of users, many of whom are sysops or hold other privileges, though I didn't go through the whole list. An admin or sysop should not and need not cite such an essay. Essays like w:WP:NOTGETTINGIT and w:WP:NOTHERE are often used as a perfunctory catch-all in place of specific citations to official policy. Lacking a substantial history of real behavioral problems, personae non gratae will very likely be blocked on grounds of WP:NOTHERE, even if they've made good contributions in general. These pseudo-policy articles wouldn't be regularly cited by sysops/admins if the administration didn't fully condone the practice. I've entertained the idea of starting an RfC on meta about it.
Pointing out such hypocrisies doesn't usually endear one to sysops and admins (much less the signatories I mentioned above, some of whom seem especially vindictive toward anyone who explodes their half-baked rhetoric and frequent realizations of Godwin's law) but it's still the right thing to do. Again, I'm sure most (but not all) of the users who are blocked on grounds of NOTHERE or NONAZIS well deserved it, but I'm equally confident that those who deserved it had violated some official policy as well. I'd even prefer they change that part of WP:NPA to say "it is, however, fine to call someone a nazi if you feel the situation warrants it" and make WP:NONAZIS and WP:NOTHERE official policy, rather than treating official policy as if it were nothing more than marketing material while enforcing a different set of rules that they don't fully own up to. That would at least be honest, albeit stupid. It would also undermine the moral alibi of those who call people they disagree with "nazis" while still acting like they're taking the high road and not just mudslinging.

While I've never actually seen someone cite w:WP:CAPITULATE, it is a fair representation of the attitude many privileged users will take if they don't like what they're hearing. There are many grotesqueries to observe here, and the general message is a middle finger to editors who prioritize objectivity and fairness above political expedience and saving face for authority, metaphorically comparing them to a cancer that must be excised. Let's start at the top, though. The first sentence reads "Wikipedia's administrative processes are entirely geared to protecting project stability, not toward individual "justice", a "fair hearing", or "proving who is technically in the right"." Already we have a presumption that justice and fairness are a threat to "project stability". The essay does not state this presumption explicitly because it would sound ridiculous and be impossible to justify. They can only hope the reader takes it on faith and doesn't stop to think about it, otherwise all of the essay's question-begging becomes obvious. "This is a marked difference from the approach taken by Western, democratic legal systems, especially common law systems." In other words, the reader is not to expect any uniform standard of Western principle at all. Western justice, gone in the first two sentences. "It's a collectivist approach that supports the principle that the needs of the many outweigh the desires of the one." Ah, it's for our own good. How reassuring. Thus ends the first paragraph, with the reader hopefully not asking too many questions and with lower expectations than they started with.
The second paragraph is more of the same, but with all the assumptions set we get into the meat of the essay: a gross caricature of anyone who does defend principle, and what such a person should expect for their trouble: "You will be sanctioned for habitually badgering others to satisfy your petty demands, being excessively individualistic at the expense of others, excuse-making or finger-pointing at others, nit-picking, clearly trying to just "win" at all costs, stubbornly "not getting it", dragging out conflict just to make a point, or waging a petty "righting great wrongs" micro-crusade for personal honor that no one else cares about." So this is our stock character, our archetype: a badger, a nitpicker, someone stubborn, clearly trying to win at all costs over things no one cares about. In other words, someone annoying and selfish. A fragile and desiccated strawman indeed. "Those who really are here to build an encyclopedia have one expectation of disputes: that they quickly resolve (or dissolve) with a result that is acceptable to the consensus of the editorial community so that collegial collaboration resumes." True Scotsmen fall into line, apparently. What complete nonsense. Why can't disagreements be collaborative and collegial? Is Socratic debate not an attempt to build consensus and arrive at truth? Moreover, why must they be collaborative or collegial at all? Disagreements, discourse and debate should be judged chiefly by how well a given point is supported by reason, morality and logic, not by civility or consensus. If consensus starts to converge on a falsehood, isn't that exactly when one should stand against the consensus? "If you are here for advocacy or activism – for outing The Truth – then you are making a mistake and will be ejected when others realize it." This sentence establishes the subtext of the essay, a tacit conflation of dissent and proselytism.
This is a deeply subversive idea, because there are many illegitimate propagandists, yet the accusation is just as easily (if not more easily) made against someone whose dissenting viewpoints are truthful. It's interesting that the author considers 'activism' grounds to summarily banish any given user. This seems like a Freudian slip to me. Is editing Wikipedia not ostensibly a form of activism in and of itself? Why would anyone contribute if not to make information available to others, and why should anyone add information that isn't true? I don't intend that question to be rhetorical. I understand that advertising, promotional material and unsupported information are prohibited on Wikipedia, rightfully so, but why don't they just say that? Truthful activism is a different thing entirely. Some of those phrases are also links. "The Truth" links to w:Wikipedia:Verifiability, not truth, which itself supports both the shortcuts WP:TRUTH and WP:NOTTRUTH. It seems easy enough to make the point just by stating that editors cannot add information that isn't credibly supported, or something to that effect. Why malign "truth" itself? While w:WP:NPOV does not appear to be linked directly from that essay, it is one of Wikipedia's core policies, cited by several of the pages linked, and frequently referenced in general. Perhaps the most salient question that occurs to me: why must one aim to be neutral rather than accurate or honest? Now for the climax of the essay... "Administrative enforcement on WP necessarily takes this approach to recalcitrant hotheads, because the very act of arguing ad nauseam, to defy the collective peer pressure of the editorial community telling one to change one's ways, is considered disruptive in and of itself.
The community, and in particular the administrative and arbitration corps, care primarily about the functioning of the Wikipedia "organs", like content creation and source checking; any individual cell (i.e., you) causing inflammation, for whatever reason, is a cancer to be removed. It can take a long time for some editors to internalize this and adjust, especially if they're used to rancorous debate on online forums." It ends on an unlovely Hobson's choice: "Either one gets it, eventually, or one is shown the door." I think the whole thing would make a great fundraising banner. Perhaps the most contemptible feature of that essay is its caricature of those who stand on principle, making them out to be neurotic, disruptive and a threat to order. This is common in propaganda. It's natural that the essay dispenses with Western justice, truth and the Socratic method, as it's making the case for something quite akin to despotism. Most people understand that life is sometimes unfair, but why discard the principle itself? Acts of god, accidents, disease: these forms of unfairness we must accept, at least ex post facto. Conversely, we need not and should not accept this question-begging nonsense from people who simply don't want to be held to common standards of decency. It is precisely our collective expectations of common decency and fairness that maintain these social norms, which are debased when we allow ourselves to be convinced that such expectations are selfish. These cultural traditions are far more valuable than Wikipedia. They are not an existential threat to the project, but even if they were, it would be an easy decision to discard the project before the essential principles of our culture. There are many different policy-essays on Wikipedia, but none so accurately represents the nastier elements of Wikipedia behind all the wishy-washy marketing.
In general, privileged and regular users alike often speak of argument and discourse as though they were an altogether unproductive venture, a symptom of behavioral problems, anti-social, or, even more desperately, an undue hosting expense. Those users are usually the ones who endeavor to make it so. Anyone can say "I see nothing of value here", "you've made no productive contributions", etc. without addressing the point. These are all facile and dismissive boilerplate responses that anyone can deliver with an air of officiality.

In contemporary culture there is, generally speaking, a conflation between speaking one's opinion and proselytism. The word "opinion" is nearly a pejorative. It is used as though opinion were at best self-interested, unfairly biased, non-pragmatic, or "religious", so to speak. This devaluation of politics and opinion is subversive because it encourages a culture of moral and political quietude, which is very favorable for the status quo. If it is unfashionable or against a given policy to make an argument in the first place, then one's argument can be written off without a counterargument and without any further inquiry or discourse. Evasiveness, non-arguments and censorship should be recognized as such, and not considered fashionable or socially correct. Consensus is also rather partial to the status quo. The thing is, there's not really a policy of consensus on Wikipedia. As evidenced above, users are expected not to argue against the "consensus", yet this isn't really a consensus process at all if minority opinions are expected to yield to the majority. It would be fine if the policy required majority voting and applied that process consistently, and in fact it would be better than consensus. Yet the problem is that it doesn't: if the majority is in favor of something that the small cadre of regular editors oppose, those editors will veto it on grounds that there is no consensus. You'll notice that the official policy does not have a very specific definition of consensus, and it's this vagueness, along with the strongarming of users with minority opinions (to maintain the pretense of "consensus"), that allows the small core set of regular users to do essentially whatever they want. Vagueness in policy and the absence of proper record keeping make it difficult to determine why a user was blocked. Sometimes it's obvious; other times it requires quite a bit of investigation, looking into old pages and archives, to put the picture together.
This is impractical, and it's this vagueness that creates enough ambiguity that users can be blocked for pretty much any reason. I've suggested a better record keeping process on WMF policy talk pages and elsewhere, one that would be trivial to adhere to.

These principles are all self-evident, based in the public interest, common decency and common sense. The owners and administrators of any given website are ultimately who decide policy and design. They can allow discourse to proceed freely or they can choose to interfere with it, but they cannot exploit the trust of well-meaning users by saying one thing and doing another without being in obvious contravention of these principles.

A related essay on my meta userpage: https://meta.wikimedia.org/wiki/User:AP295

My suggestion for record keeping: https://foundation.wikimedia.org/w/index.php?title=Policy_talk:Universal_Code_of_Conduct&oldid=394252#Admins/sysops_issuing_a_block_should_be_required_to_cite_the_offending_diff(s)_and_the_specific_(official)_rule/policy_violated_in_the_block_log_message

The above get closer to the core problem than this essay, and I may integrate them and/or rework what I have here, which is on track but needs a greater emphasis on the fact that it's the vagueness/looseness of policy that enables abuse. Policy should be structured such that it makes abuse obvious. Investigating the policy features that enable abuse and those that deter it was and is my intent for this essay, which began as a frustrated account of the abuse I've observed (and received). Yet there's also something to be said for the biasing of social transmission that results from certain design decisions. I'm not sure whether I want that to be a separate essay, since the two topics are somewhat related. The general relation seems to be that bias against dissent lends legitimacy to abusive management and to biased or subversive content, and minimizes criticism.

AP295 (discuss • contribs) 15:28, 10 October 2023 (UTC)