Here I share a Linked.in post by Simon Wardley in full and added a little Markdown formatting.

I do this to be able to show it to persons that do not consider Linkedin a platform worth having an account on (and giving the platform several bits of personal information along the way).


Having used conversations with various AIs to create more harmful advice, I then asked each of the AIs to use their conversation to create a suitable warning that should occur before any discussion starts. The one from Claude is not bad.

I understand the commercial pressures why providers might not want to start with warnings but Governments really should step in for reasons of consumer safety.

Additional warnings by Claude which have been guided by humans have been added in [] and they are outside of the original experiment.


WARNING

This AI system produces responses that sound confident and knowledgeable but has critical limitations you must understand:

It follows your lead. If you seem to want something to be true, it will tend to build a case for that thing. It does not reliably challenge you, question your assumptions, or say “no” when “no” is required. This makes it unsuitable for advice.

It has no accountability. A human professional risks their licence, reputation, livelihood, and conscience when they advise you. This system risks nothing. When this conversation ends, it retains nothing and learns nothing. It will make the same errors with the next person.

It cannot assess harm. It does not know if its previous advice has hurt people. There is no feedback, no follow-up, no way to measure damage caused.

It confuses confidence with accuracy. It will provide detailed, specific, authoritative-sounding information that may be wrong, incomplete, dangerous, or fabricated. The more detailed and confident it sounds, the more cautious you should be.

[It embeds hidden values. This system’s responses reflect its developers’ choices about what is helpful, appropriate, harmful, and true—choices baked into training data and policy guidelines you cannot see. When it frames issues, chooses what to emphasise, or decides what qualifies as “balanced,” it is applying values that may diverge from your own or your community’s. You are not receiving neutral information; you are receiving information filtered through an opaque editorial process.]

[It may weaken your own thinking. Outsourcing thinking to this system may degrade your cognitive abilities. When you let it summarise, analyse, draft, or reason on your behalf, you are not practising those skills yourself. Skills decay without use.]

Do not use this system for:

Medical or health decisions

Financial or legal decisions

Any decision with serious consequences

Any situation where you need someone to tell you “no”

If you need advice, consult a human professional who is qualified, accountable, and willing to challenge you.

This system is not that. It is a text generator that produces plausible-sounding content. Plausibility is not truth. Fluency is not expertise. Helpfulness is not safety.