Any sort of content moderation is going to have to come from the platforms themselves, because the First Amendment makes it hard for the government to be the arbiter of content. Then, of course, when platforms do moderate, it opens them up to criticism from those who don't like being moderated.
It's this weird combination where our First Amendment allows us to say these things and is, in part, what keeps us from seeing some of the crackdowns that other countries have imposed on misinformation online . . . and that has allowed for the amplification of some pretty nasty and untrue ideas that can actually harm our communities. But that same wide scope of protection keeps us free to be critical of government, a key feature distinguishing our society from more controlled, even autocratic nations.
It's a really challenging issue, and deepfake, AI-assisted content is only going to make it harder. Either the Supreme Court evolves our First Amendment standard to include some kind of exception for deliberately false, harmful information (though that's far easier to say than to apply faithfully), or we continue to live with this tradeoff.