# ID: robots.txt 2007-03-27
#
# This is a file retrieved by web walkers (a.k.a. spiders) that
# conform to a de facto standard.
# See
#
# Any URL matching one of these patterns will be ignored by
# search-engine crawlers. Use the Disallow: statement to prevent
# crawlers from indexing specific directories.
#
# Format is:
#    User-agent: <agent-string>
#    Disallow: <nothing> | <path>
# -----------------------------------------------------------------------------

User-agent: *
Disallow: /cgi-bin/
Disallow: /E/
Disallow: /e/
Disallow: /F/
Disallow: /f/
Disallow: /menu/
Disallow: /formspubs/
Disallow: /test/
Disallow: /lmsi/
Disallow: /pme/
Disallow: /sme/
Disallow: /sima/
Disallow: /tarif/
Disallow: /tariff/
Disallow: /eddp/
Disallow: /ebci/cmcp/
Disallow: /general/times/

# Note: Googlebot-Image supports wildcard pattern matching
# ('*' matches any sequence of characters; '$' anchors the end
# of the URL); other bots may not honor these wildcards.
User-agent: Googlebot-Image
Disallow: /*-sign.jpg$
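# Example (illustrative only, not an active rule on this site):
# under Google's wildcard rules, the pattern above matches any URL
# ending in "-sign.jpg", e.g. /images/stop-sign.jpg, but not
# /images/stop-sign.jpg?size=2, because '$' requires the URL to
# end at that point.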