
site discoverability/web crawlers/bots #180

Answered by ericsilvertx
vtp10 asked this question in Q&A

By design, web crawlers and bots look for a site's robots.txt file and behave accordingly. My interpretation is that there should be no restriction on who or what system can retrieve the files.
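As a quick illustration of that robots.txt behavior, here is a minimal sketch using Python's standard-library `urllib.robotparser`. The robots.txt content and the URL below are hypothetical, not taken from this site:

```python
from urllib import robotparser

# Hypothetical, fully permissive robots.txt: every user agent may fetch
# everything. A real site serves this file at https://<host>/robots.txt.
permissive_robots = [
    "User-agent: *",
    "Disallow:",  # an empty Disallow means nothing is off-limits
]

rp = robotparser.RobotFileParser()
rp.parse(permissive_robots)

# A well-behaved crawler checks the policy before fetching a URL.
print(rp.can_fetch("Googlebot", "https://example.gov/data/records.json"))
```

With a policy like this, `can_fetch` returns `True` for any agent and path, which matches the "no restriction on retrieval" interpretation above.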

Replies: 1 comment 4 replies

4 replies
@retjami
@RLTx1391
@rajkumarsowmy
@shaselton-usds
Answer selected by shaselton-usds
Category
Q&A
6 participants