Amazing that it took some former Googlers trying to compete with their ex-employer to get press on this issue, which is very, very old. Google's monopoly in search relies in part on cooperating websites, which allow Google to crawl but block everyone else. Why? Because Google pays them or helps them get paid. Or at least, the site operator believes Google is the only way her site can be found.
If all websites offered sitemap.xml files (links to these files are found in robots.txt), then "crawling" would in many cases be unnecessary.
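To illustrate the mechanism: the Sitemaps protocol lets a robots.txt file declare `Sitemap:` lines pointing at sitemap.xml files, so a client can discover a site's URL inventory without crawling page by page. A minimal sketch (the `example.com` URL is illustrative):

```python
def find_sitemaps(robots_txt: str) -> list[str]:
    """Return the sitemap URLs declared in a robots.txt body."""
    sitemaps = []
    for line in robots_txt.splitlines():
        # The "Sitemap" field name is case-insensitive per the protocol.
        key, _, value = line.partition(":")
        if key.strip().lower() == "sitemap":
            sitemaps.append(value.strip())
    return sitemaps

robots = """\
User-agent: *
Disallow: /private/
Sitemap: https://example.com/sitemap.xml
"""
print(find_sitemaps(robots))  # ['https://example.com/sitemap.xml']
```

Note `partition(":")` splits only at the first colon, so the `https://` in the URL survives intact.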
"What do you think?" is a starting point to lead the discussion. I think it's okay to elaborate further, as long as you don't manipulate the title to purposely bias the original intent. Just my personal POV; be flexible, no hard feelings.
Thanks for pointing that out. In fact, I read the guidelines many times before I started submitting, but I can't really find any statement saying that ONLY the "original titles of the articles" are allowed.