Hacker News | jonplackett's comments

Britain was still literally an empire at this point, currently colonising India and many other places.

You can say they should fight the Nazis anyway, but making the argument that they should do this because Britain weren't colonisers doesn't make sense.


Google really are a fully woken sleeping giant. More code reds being issued today I expect.

I wish website designers would remember that not everyone can see well. This text is so thin and light, and they've also disabled screen reader support.

Oh, I didn't notice that at first, but you are right.

What I did notice is the many fast-moving videos right next to the text. I didn't even bother to read it (without Firefox Reader Mode) because it makes me a bit dizzy.


Even though my vision is okay, it still feels like a slap in the face. They can't take some time to make sure the most important part of the page is readable by all?

I specifically have issues with strong backlighting (genetic cataracts suck - I'm only in my 40s), so a bright white page with light text is super frustrating. Dark mode is my best friend.

Have you tried Firefox Reader View? It allows you to set whatever text size and font is best for you.

Firefox Reader View gives me better contrast. The text-to-speech mode also works in Reader View.

The Air Canada chatbot that mistakenly told someone they could cancel and be refunded for a flight due to a bereavement is a good example of this. It went to court, and Air Canada had to honour the chatbot's response.

It’s quite funny that a chatbot has more humanity than its corporate human masters.


Not AI, but a similar-sounding incident happened in Norway. Some traders found a way to exploit another company's trading bot on the Oslo Stock Exchange. The case went to court. And the court's ruling? "Make a better trading bot."

I am so glad to read this. Last I had read on the case was that the traders were (outrageously) convicted of market manipulation: https://www.cnbc.com/2010/10/14/norwegians-convicted-for-out...

But you are right, they appealed and had their appeal upheld by the Supreme Court: https://www.finextra.com/newsarticle/23677/norwegian-court-a...

I'm very glad about the result.


Chatbots have no fear of being fired; most humans would do the same in a similar position.

More to the point, most humans loudly declare they would do the right thing, so all the chatbot's training data is of people doing the right thing. There are comparatively few loud public pronouncements of personal cowardice, so if the bot's going to write a realistic completion, it's more likely to conjure an author acting heroically.

Do they not? If a chatbot isn't doing what its owners want, won't they just shut it down? Or switch to a competitor's chatbot?

"... adding fear into system prompt"

What a nice side effect. Unfortunately, they'll lock chatbots behind more barriers in the future, which is ironic.

...And under pressure, those barriers will fail, too.

It is not possible, at least with any of the current generations of LLMs, to construct a chatbot that will always follow your corporate policies.


That's what people aren't understanding, it seems.

You are providing people with an endlessly patient, endlessly novel, endlessly naive employee to attempt your social engineering attacks on. Over and over and over. Hell, it will even provide you with reasons for its inability to answer your question, allowing you to fine-tune your attacks faster and more easily than with a person.

Until true AI exists, there are no actual hard-stops, just guardrails that you can step over if you try hard enough.

We recently cancelled a contract with a company because they implemented student facing AI features that could call data from our student information and learning management systems. I was able to get it to give me answers to a test for a class I wasn't enrolled in and PII for other students, even though the company assured us that, due to their built-in guardrails, it could only provide general information for courses that the students are actively enrolled in (due dates, time limits, those sorts of things). Had we allowed that to go live (as many institutions have), it was just a matter of time before a savvy student figured that out.

We killed the connection with that company the week before finals, because the shit-show of fixing broken features was less of a headache than unleashing hell on our campus in the form of a very friendly chatbot.


With a chat AI plus a guardrail AI, it will probably get to the point of being reliable enough that the number of mistakes won't hit the bottom line.

...and we will find a way to turn it into malicious compliance, where the rules aren't broken but the stuff the corporation wanted to happen doesn't.


Efficiency, not money, seems to be the currency of chatbots.

That policy would be fraudulently exploited immediately. So is it more humane or more gullible?

I suppose it would hallucinate a different policy if its context window included the interests of shareholders, employees, and other stakeholders, as well as the customer. But it would likely be a more accurate hallucination.


Fonts are an absolute joke. Foundries must be run by the mafia at this point.

Either...

Better (UX / ease of use)

Lock in (walled garden type thing)

Trust (If an AI is gonna have the level of insight into your personal data and control over your life, a lot of people will prefer to use a household name)


Or lobbying for regulation. You know, the "only American models are safe" kind of regulation.

> Trust (If an AI is gonna have the level of insight into your personal data and control over your life, a lot of people will prefer to use a household name)

Not Google, and not Amazon. Microsoft is a maybe.


People trust Google with their data in Search, Gmail, Docs, and Android. That's quite a lot of personal info, and trust, already.

All they have to do is completely switch the Google homepage to Gemini one day.


The success of Facebook basically proves that public brand perception does not matter at all.

Facebook itself still has a big problem with its lack of a youth audience, though. Zuck captured the boomers and older Gen X, which are nevertheless the biggest demographics of living people.

> Zuck captured the boomers and older Gen X, which are nevertheless the biggest demographics of living people.

In the developed world. I'm not sure about globally.


Apple need India, though. They're moving a lot of their manufacturing there to de-risk from China.

Also, they gave in to the CCP and always say ‘we obey the laws of the countries in which we operate’.

Apple is, at the end of the day, just a business.


> Apple need India, though. They're moving a lot of their manufacturing there to de-risk from China

That creates obligations both ways. Put another way, Apple is an increasingly-major employer in India.

The real carrot New Delhi has is its growing middle class. The real carrot Apple has is its aspirational branding.

> they gave in to the CCP and always say ‘we obey the laws of the countries in which we operate'

Apple regularly negotiates and occasionally openly fights laws it disagrees with. This would be no different. Cupertino is anything but lazy and nihilistic. Mandated installation opens a door they've fought hard to keep shut, because it carries global precedent.


I fear (Apple) will do something that allows the government to do what it wants (with a bit more work) without explicitly installing something.

For example, with the UK encryption debacle, Apple removed Advanced Data Protection (e2e encryption) for iCloud users in the UK. So users' notes, photos, and emails are possibly open.


> fear will do something that allows the government to do what it wants (with a bit more work) without explicitly installing something

Why this isn’t being done at the SIM/baseband level is beyond me.


"Leave us alone or we'll cancel our plans and move somewhere else"

But but but they ‘have a limited budget’ (repeated multiple times for effect in the article)

~£4mil budget and 50 staff

It's just about incompetence, really. The budget is meant to be highly secret, and they accidentally published their report early, which would let some people benefit from it financially. But it's also just very embarrassing for a government. Sometimes budgets contain info that is more valuable than this.

Turns out, no. No, they would not.
