I have found that Bing/Yahoo/DuckDuckGo, Yandex, and Google report crawl errors when using the default robots.txt. Specifically, their bots will not crawl the path `/` or any sub-paths. I agree that the current robots.txt should work and properly implements the specification; however, it still does not work in practice.
In my experience, explicitly permitting the path `/` by adding the directive `Allow: /` resolves the issue.
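For reference, a minimal robots.txt with the explicit directive might look like this (a sketch, not the repo's exact file):

```
# Apply to all crawlers
User-agent: *
# Explicitly permit the site root and all sub-paths
Allow: /
```

Some crawlers treat an empty or ambiguous rule set conservatively, so stating `Allow: /` removes any room for interpretation even though the specification says an absent `Disallow` should permit everything.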
More details can be found in a blog post about the issue here: https://www.dfoley.ie/blog/starting-with-the-indieweb