Hacker News

Sure, it would make things hard for you, but again, if you were using any other sort of data, there'd be usage costs. The news folks are just figuring that part out now, while every tech-first company has had that baked into its business model from the start.


Except for the fact that transmission/distribution costs are almost zero and 'news' is a broadly consumed information resource. It doesn't matter that there would have been usage costs in the past -- they would be silly now.


If every other API charges for usage, why shouldn't news sites?

If you want to index their stories, you can do 500 queries per day for free. After that, you pay, just like any other API.
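The quota scheme described above (a fixed number of free queries per key per day, billable after that) could be sketched roughly like this; the class name, the 500-query limit, and the UTC-day reset are all illustrative assumptions, not anyone's actual implementation:

```python
import time
from collections import defaultdict

DAILY_FREE_QUERIES = 500  # free tier from the comment; illustrative

class DailyQuota:
    """Track per-API-key query counts, resetting each UTC day."""

    def __init__(self, free_limit=DAILY_FREE_QUERIES):
        self.free_limit = free_limit
        self.counts = defaultdict(int)   # api_key -> queries so far today
        self.day = self._today()

    def _today(self):
        # Integer UTC day number; good enough for a daily reset.
        return int(time.time() // 86400)

    def charge(self, api_key):
        """Record one query. Return True if it's free, False if billable."""
        today = self._today()
        if today != self.day:            # new day: reset all counters
            self.counts.clear()
            self.day = today
        self.counts[api_key] += 1
        return self.counts[api_key] <= self.free_limit
```

A server would call `charge(key)` on each request and either serve the query or route the caller to billing (or an HTTP 429) once it returns False.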


But the API is HTTP and the news stories are syndicated across a hundred different sites. How do you limit the crawlers under this scheme? It seems like any serious attempt to limit crawling will require major software redeployment, cooperation of crawlers, widespread authentication, or some combination of these. Is there actually a feasible way to do this without breaking the web?


These are all good points. I don't have answers to any of them. Feasibility is a whole other issue.

My point is that if you look at online newspapers as online services, then they should be able to charge people for programmatic access to their service, just like any other tech service does through its API.

If I want to build an app on the back of Yahoo BOSS, I have to pay Yahoo.

If I want to build an app on the back of the New York Times, maybe I should have to pay the New York Times.





