
§ 230 and the Preserving Constitutionally Protected Speech Act

My PDF testimony is available, as is the testimony of the other witnesses, but I thought it would be interesting to also post the text here. Because I commented on five separate proposals, I'm splitting this up into separate posts. My plan was to provide an objective analysis of the proposals, focusing on their non-obvious consequences. I do have my own opinions on some of these proposals, but I've tried to keep those separate from the objective analysis.

[II.] Preserving Constitutionally Protected Speech Act

There are many provisions in this bill.

[A.] Authorizing State Civil Rights Laws That Ban Political Discrimination

The bill would amend § 230(c)(2) to provide (in proposed new § 230A(a)(2)) that,

No covered company [basically, a large social media platform] shall be held liable on account of … any action voluntarily taken in good faith to restrict access to or availability of material that is not constitutionally protected,

whereas the current version of § 230(c)(2) closes with,

that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected.

The bill (to oversimplify a bit) would thus make clear that platforms lack federal immunity when they block constitutionally protected material that isn't sexually themed or excessively violent. States could then, if they so chose, limit platforms' ability to remove users and posts based on the political views they express (or their religious or scientific views, and the like). The bill wouldn't itself ban such political discrimination, but it would allow states to do so.[1]

I have recently written about whether such bans on political discrimination by social media platforms are consistent with the platforms' own First Amendment rights.[2] But the bill would make clear that § 230 doesn't preempt such bans.

[B.] Requiring That Users "Knowingly and Willfully Select[]" … Algorithm[s] to Display Content

The bill would also strip large platforms of immunity if they "utilize[] an algorithm to amplify, promote, or suggest content to a user unless a user knowingly and willfully selects an algorithm to display such content" (proposed § 230A(c)(3)). But nearly everything computers do, they do via "algorithm[s]."

Perhaps a platform that amplifies, promotes, or suggests content to its users could comply simply by prompting each user, up front, to "Click Here to Select Our Algorithm for Suggesting Material to You," and declining to do any amplify[ing], promot[ing], or suggest[ing] of content for that user until the user clicks. If so, that should be easy enough for the platform to do, though it's hard to see how it would help anyone.

But if such a click doesn't count as "knowingly and willfully select[ing] an algorithm," then it's hard to know what platforms must do before they may suggest content. Would they have to offer at least two algorithms, so that there is a genuine "select[ion]"? Would they have to explain how each algorithm works, so that the selection counts as "knowing[]"? Would they have to do something else? And what would users gain from any of this? Given the bill's language, it's difficult to say.

[C.] Requiring Platforms to Offer Explanations and Appeals for All Removals

It would also provide (emphasis added):

Each covered company shall implement and maintain reasonable and user-friendly appeals processes for decisions about content on such covered company's platforms….

For any content a covered company edits, alters, blocks, or removes, the covered company shall— …

clearly state why such content was edited, altered, blocked, or removed, including by citing the specific provisions of such covered company's content policies on which the decision was based ….

Much turns, though, on what counts as a "reasonable" appeals process, what counts as "user-friendly," and what counts as "clearly stat[ing]" the reasons for a decision. Say, for instance, that a platform states, "we removed this material because it was indecent / hateful / misleading / supportive of violence." Is that enough? Or would the platform have to explain just where it draws the line between art and pornography? Would it have to explain why it read an ambiguous statement as "hateful" or "supportive of violence" rather than some other way? Would it have to justify why it deemed certain controversial material "misleading"?

The bill also says that the appeals process must "provide an opportunity" for [the] user to explain why the user believes the covered company should not have taken the action, including by demonstrating an inconsistency with the company's stated content policy. Would the platform then have to clearly state why the removed material merited removal even though the platform hadn't removed other arguably similar material, i.e., why its policy wasn't inconsistent[ly] appl[ied]? How terms like "clearly state" are interpreted will significantly affect how much expense, litigation, and deterrence the proposal would produce.

These provisions would be enforced by the FTC (sec. 203(a)) and by state attorneys general or other state executive officials (sec. 203(b)), but not by private litigants. Recall, though, that (as noted in Part II.A) the bill would allow states (1) to ban social media political discrimination and (2) to let private litigants sue over such discrimination. If states did so, these transparency requirements could help such private litigants gather evidence that they were discriminated against because of their political views.

[D.] "Conservative"/"Liberal" Content and Accounts

The provision dealing with [content enforcement] decisions related to "conservative content," "conservative accounts," "liberal content," and "liberal accounts" would likely be unconstitutionally vague: "conservative" and "liberal" are not well-defined terms, and it's hard to see how they could be defined clearly enough for legal purposes.[3]

[1] Even without this provision, 47 U.S.C. § 230(c) may not preempt states from banning platforms' removal of posts based on the posts' political views, see Adam Candeub & Eugene Volokh, Interpreting 47 U.S.C. § 230(c)(2), 1 J. Free Speech L. 175 (2021), https://www.journaloffreespeechlaw.org/candeubvolokh.pdf; but that is just one possible reading, and courts are divided on the question.

[2] Eugene Volokh, Treating Social Media Platforms Like Common Carriers?, 1 J. Free Speech L. 377 (2021), http://www.law.ucla.edu/volokh/pubaccom.pdf.

[3] Cf. Hynes v. Mayor & Council of Oradell, 425 U.S. 610 (1976) (striking down as unconstitutionally vague a requirement that door-to-door solicitors for any "Federal, State or County political" cause register with the borough before soliciting, because it was not clear what that phrase meant).