I don’t have my own instance or server, so I don’t know, but it would be interesting to know.
Since communities are viewable by anyone without an account, including search engine crawlers, this is the case by default. It is then up to search engines to crawl them and rank them appropriately.
A major problem right now is that search engines massively down-rank pages with duplicate content, and that's the case with most Lemmy instances because of federation. If the fediverse ever becomes large enough to matter, they may change that, but currently finding things on the fediverse is not exactly a good time.
Duplicate content shouldn’t be a problem as every post has a source URL. This is linked in the HTML head as the canonical URL. That way search engines know where something is from and that only that one is the true source.
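As a minimal sketch of the mechanism described above (the instance names and post URL are made up), here is what the canonical annotation looks like in a page's head, and how a crawler might extract it using Python's standard-library HTML parser:

```python
from html.parser import HTMLParser

# Hypothetical page served by a federating instance. The canonical URL
# in the head points back at the instance where the post originated, so
# a crawler can treat that copy as the one true source.
PAGE = """
<html>
  <head>
    <link rel="canonical" href="https://origin.example/post/123">
    <title>Mirrored post</title>
  </head>
  <body>Post body</body>
</html>
"""

class CanonicalFinder(HTMLParser):
    """Collects the href of the first <link rel="canonical"> tag seen."""
    def __init__(self):
        super().__init__()
        self.canonical = None

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "link" and a.get("rel") == "canonical" and self.canonical is None:
            self.canonical = a.get("href")

finder = CanonicalFinder()
finder.feed(PAGE)
print(finder.canonical)  # https://origin.example/post/123
```

A crawler that honors this annotation would index only the origin URL and fold the federated copies into it, which is exactly what avoids the duplicate-content penalty.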
Except lots of people post the exact same thing to every community with a related name across many instances.
I mean, do they? Do the search engines actually do that? I don't know that they do. They could, but why spend the time building that?
That’s standard HTML stuff available for decades.
Semantic HTML is largely ignored by search engines. If you're talking about the source tag, it does not syndicate, at least on Google.
If you’re talking about iframes, Lemmy does not use them. The content appears as though your home instance hosts it (hence why images need to be moderated off-instance so badly).
Google supports rel canonical link annotations as described in RFC 6596.
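For what it's worth, RFC 6596 defines "canonical" as a general link relation, so it can also be delivered outside the HTML, in an HTTP Link header. A minimal sketch of reading that form (the header value here is made up):

```python
import re

# Hypothetical Link response header from a federating instance.
link_header = '<https://origin.example/post/123>; rel="canonical"'

# Extract the target URL of a rel="canonical" link relation.
match = re.search(r'<([^>]+)>\s*;\s*rel="canonical"', link_header)
canonical = match.group(1) if match else None
print(canonical)  # https://origin.example/post/123
```

This regex is only a sketch; a real crawler would use a full Link-header parser, since the header can carry multiple comma-separated relations.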
Yeah, it's a bit tricky, and React is not super SEO-friendly. But occasionally Google indexes some pages it can find (or at least the front page).
If you submit it to the engine, it should crawl it.
Other comments are good answers. I would like to add that search engines, including Google, appear to be struggling to index the entire Lemmyverse because we generate more content than they can process. With time, the priority we are given will increase, until search engines figure out Lemmy is top-quality content.