They simply need to change the way relevance is measured, with some mechanism that can evaluate the quality of a page. The algorithm should penalize sites whose content is very similar to other sites (like those that scrape GitHub or Stack Overflow), low-effort sites, and sites that are infested with ads.
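One way that kind of duplicate-content penalty could work is sketched roughly below: compare word shingles with Jaccard similarity and down-weight a page that closely mirrors something already indexed. The threshold and penalty numbers are made up for illustration, not anything a real search engine publishes.

```python
def shingles(text: str, k: int = 5) -> set[tuple[str, ...]]:
    """Return the set of k-word shingles for a page's text."""
    words = text.lower().split()
    return {tuple(words[i:i + k]) for i in range(max(len(words) - k + 1, 1))}

def jaccard(a: set, b: set) -> float:
    """Jaccard similarity between two shingle sets."""
    if not a or not b:
        return 0.0
    return len(a & b) / len(a | b)

def adjusted_score(base_relevance: float, page_text: str,
                   indexed_pages: list[str],
                   dup_threshold: float = 0.8,
                   dup_penalty: float = 0.5) -> float:
    """Down-weight a page whose content closely duplicates an indexed page.

    dup_threshold and dup_penalty are illustrative assumptions."""
    page_sh = shingles(page_text)
    max_sim = max((jaccard(page_sh, shingles(other)) for other in indexed_pages),
                  default=0.0)
    if max_sim >= dup_threshold:
        return base_relevance * dup_penalty
    return base_relevance
```

Scrapers that mirror Stack Overflow answers verbatim would score near 1.0 against the original and get pushed down, while pages that merely quote a snippet would stay below the threshold.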
And since so much quality information is in YouTube videos, and YouTube already generates transcripts for them, why can't you search through those?
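Searching a transcript doesn't need to be fancy; a minimal sketch, assuming transcript segments come as (start_seconds, text) pairs (a hypothetical simplification of whatever format the transcripts actually use), could just return the timestamps where the query appears so a result can link straight into the video:

```python
def search_transcript(segments: list[tuple[float, str]],
                      query: str) -> list[tuple[float, str]]:
    """Return (timestamp, text) for every segment containing the query."""
    q = query.lower()
    return [(start, text) for start, text in segments if q in text.lower()]

# Example with made-up segments: find where "binary search" is discussed.
segments = [
    (12.0, "today we'll cover sorting and searching"),
    (95.5, "binary search cuts the range in half each step"),
]
print(search_transcript(segments, "binary search"))
```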