Should it be legal for social media to censor harmful misinformation?

Should it be legal for social media such as Facebook, Twitter or YouTube to censor (remove) what they consider to be harmful misinformation, e.g. that relating to COVID-19?

Arguments for

 * Each restriction of freedom must be balanced against the gain of harm reduction. In the case of medical information, the harm is significant.
 * The specific harm needs to be stated. For instance, if a president claims a certain drug helps against COVID-19, the adult citizen must know that the president is no medical expert and that they should either consult a medical professional or conduct independent Internet research using reliable sources. See also Mill's harm principle.
 * Mill's harm principle pertains to laws, not to private enterprises.
 * We need to think de facto, not de jure. If a private enterprise has the power to create an effect similar to that of censorship by government, the situation is in one way even worse than state censorship: at least politicians were elected, whereas the private enterprise was not.
 * There may be collusion between big business and big government. We do not know.
 * That is speculation.
 * It is an existing risk. A risk is not something that has already happened but rather a possibility that needs to be taken seriously.
 * If the official position is that the vaccine against COVID-19 effectively prevents the spread of the disease, and in fact it does not, the result of the censorship is harm. Here, harm was not prevented by the censorship but rather caused by it. And this is all too typical of censorship: it is not a tool of harm prevention but rather of unaccountable power serving nefarious ends, all too often private corporate interest. Absent proof to the contrary, each corporation should be suspected of serving its own self-preservation and growth, and the interests of allied corporations, rather than the public interest.
 * We may assume good faith rather than suspecting the worst.
 * Is assuming good faith the most epistemically adequate position, that is, one more likely to correspond to reality than not? At best, we should be agnostic about good faith.
 * Corporations are groups of people and interests. They do not directly have faith or conscience. They are a special social phenomenon, somewhat tricky to analyze: they are like persons in some ways but not others. They are in any case aggregates and concentrations of interests. On one admittedly fragile analysis, the interests gain personhood via corporations. This is not a perfect analysis, but it points in the right direction.
 * Social media are private enterprises and are not subject to constitutional protection of free speech.
 * The question is not whether censorship by them is already illegal but whether it should remain legal or be made illegal. For instance, a law could be made requiring each medium to decide: either it is a censoring publisher, liable like traditional publishers, or it is a generic publishing platform or utility, not liable for uploaded content but also not allowed to censor content except in so far as that content violates the law. It is of note that, e.g., book publishers have nothing approaching the oligopoly in their niche that some of the social media do, e.g. Facebook, Twitter and YouTube. These are dominant players in their domains, which is inconsistent with the genuine many-to-many ideal market of theoretical economics, in which each service provider has an effective substitute, not just a substitute in principle but one available in practice.
 * In the case that the misinformation could result in harm to other people, social media platforms have a legal obligation to prevent that harm.
 * This is not obvious and is not traced to a reliable source.
 * The question is not whether it is legal but rather whether it should be legal. By contrast, the argument not only claims that it already is legal but that it is legally mandatory.
 * The existence of a legal obligation is jurisdiction-dependent, which the argument does not recognize; no jurisdiction is specified, and it seems unlikely that the claim applies to all jurisdictions.
 * The "could" standard is an extremely weak standard. What does it even mean? Content that could hypothetically result in harm but does not? It seems unlikely that any legislators would use such a weak standard to regulate speech.

Arguments against

 * The harm of censorship is greater than the good of harm prevention. Adults should be treated as adults. Controversial subjects, whether alleged conspiracy theories or medical efficacy claims, should be publicly discussed. The result of censorship is all too often not the protection of truth but rather the protection of official untruth, sometimes obvious untruth. Thus, the result of censorship is an Orwellian situation in which people are expected to pretend to believe an obvious untruth because it is official. It is deeply harmful to the spirit of a free society.
 * Those who want to spread misinformation do not need to use these platforms; they may use other platforms.
 * That may be true in theory or in principle but not in practical effect. The real effect is the stifling of public debate by non-elected censors who are not accountable to the source of power in the state, the people. There is in fact a similar problem with traditional media: they too have too much political power without being elected by or accountable to the people.
 * The idea that the source of power in the state is the people is a fiction.
 * The idea that the people ought to be the source is a norm, even if reality fails to meet that norm.
 * When people who believe that drinking bleach is good for their health die, the quality of the population improves. We do not need citizens at that level of foolishness.
 * All human life is valuable, even those less gifted.
 * Someone has to work the menial jobs that people who think of themselves as intelligent do not want to work.
 * Some of the foolish do not die but end up in a hospital, and if they have insurance, the insurance company has to pay; moreover, they occupy a scarce hospital slot that could have been used by someone else. The harm is not only to the foolish.
 * Some of the foolish have children, and when the parents die, someone has to take care of the children. The harm is not only to the foolish.
 * Alternatives to censorship, such as labeling content as potential misinformation, should be considered. These could achieve some of the harm reduction without stifling the debate.