oh, I didn't notice that at first, but you are right.
What I did notice is so many fast videos right next to the text. I didn't even bother to read it (without Firefox reader mode) because it makes me a bit dizzy.
Even if my vision is okay, it still feels like a slap in the face. They couldn't take some time to make sure the most important part of the page is readable by all.
I specifically have issues with strong backlighting (genetic cataracts suck - I'm only in my 40s), so a bright white page with light text is super frustrating. Dark mode is my best friend.
The Air Canada chatbot that mistakenly told someone they could cancel and be refunded for a flight due to a bereavement is a good example of this. It went to court and the airline had to honour the chatbot's response.
It’s quite funny that a chatbot has more humanity than its corporate human masters.
Not AI, but a similar-sounding incident in Norway: some traders found a way to exploit another company's trading bot on the Oslo Stock Exchange. The case went to court. And the court's ruling? "Make a better trading bot."
More to the point, most humans loudly declare they would do the right thing, so all the chatbot's training data is of people doing the right thing. There are comparatively few loud public pronouncements of personal cowardice, so if the bot's going to write a realistic completion, it's more likely to conjure an author acting heroically.
That's what people aren't understanding, it seems.
You are providing people with an endlessly patient, endlessly novel, endlessly naive employee to attempt your social engineering attacks on. Over and over and over. Hell, it will even provide you with reasons for its inability to answer your question, allowing you to fine-tune your attacks faster and easier than with a person.
Until true AI exists, there are no actual hard-stops, just guardrails that you can step over if you try hard enough.
We recently cancelled a contract with a company because they implemented student-facing AI features that could pull data from our student information and learning management systems. I was able to get it to give me answers to a test for a class I wasn't enrolled in, as well as PII for other students, even though the company assured us that, due to their built-in guardrails, it could only provide general information for courses the students were actively enrolled in (due dates, time limits, those sorts of things). Had we allowed that to go live (as many institutions have), it was just a matter of time before a savvy student figured it out.
We killed the connection with that company the week before finals, because the shit-show of fixing broken features was less of a headache than unleashing hell on our campus in the form of a very friendly chatbot.
That policy would be fraudulently exploited immediately. So is it more humane or more gullible?
I suppose it would hallucinate a different policy if it includes in the context window the interests of shareholders, employees and other stakeholders, as well as the customer. But it would likely be a more accurate hallucination.
> Trust (If an AI is gonna have that level of insight into your personal data and control over your life, a lot of people will prefer to use a household name.)
Facebook itself still has a big problem with its lack of a youth audience, though. Zuck captured the boomers and older Gen X, who are, however, the biggest demographics of living people.
> Apple need India though. They're moving a lot of their manufacturing there to derisk from China
That creates obligations both ways. Put another way, Apple is an increasingly-major employer in India.
The real carrot New Delhi has is its growing middle class. The real carrot Apple has is its aspirational branding.
> they gave in to the CCP and always say ‘we obey the laws of the countries in which we operate'
Apple regularly negotiates and occasionally openly fights laws it disagrees with. This would be no different. Cupertino is anything but lazy and nihilistic. Mandated installation opens a door they've fought hard to keep shut, because it sets a global precedent.
I fear (Apple) will do something that allows the government to do what it wants (with a bit more work) without explicitly installing something.
For example, with the UK encryption debacle, Apple removed Advanced Data Protection (end-to-end encryption) for iCloud users in the UK. So users' notes, photos, and emails are potentially exposed.
It's really just about incompetence. The budget is meant to be highly secret, and they accidentally published their report early. That could let some people benefit from it financially, but it's also just very embarrassing for a government. Sometimes budgets contain info that is more valuable than this.
You can say they should have fought the Nazis anyway, but making the argument that they should do this because Britain weren't colonisers doesn't make sense.