
I keep coming back to this point. Lots of jobs are fundamentally about taking responsibility. Even if AI were to replace most of the work involved, only a human can meaningfully take responsibility for the outcome.


If there is profit in taking that risk, someone will do it. Corporations don't think in terms of the real-world outcome of problems; they think in terms of the cost to litigate or underwrite.


Indeed. I sometimes bring this up in terms of "cybersecurity" - in the real world, "cybersecurity" is only tangentially about the tech and hacking; it's mostly about shifting and diffusing liability. That's why certifications and standards like SOC 2 exist ("I followed the State Of The Art Industry Standard Practices, therefore It's Not My Fault"), that's what external auditors get paid for ("and this external audit confirmed I Followed The Best Practices, therefore It's Not My Fault"), that's why endpoint security exists, and why cybersec is denominated not in algorithms but in the third-party vendors you integrate, etc. It all works out into a form of distributed insurance, where blame flows around via contractual agreements, some parties pay out damages to other parties (and recoup them from actual insurance), and all is fine.


I think about this a lot when it comes to self-driving cars. Unless the manufacturer assumes liability, why would anyone purchase one and subject themselves to potential liability for something they, by definition, did not do? This issue will be a big sticking point for adoption.


Consumers will tend to do what they are told, and manufacturers will lobby the government to create liability protections for consumers. Insurance companies will weigh the risk against human drivers and underwrite accordingly.



