Discussion
akagusu: AI will never be ethical because using copyrighted material to train AI without proper copyright payments is not only unethical but illegal. Unfortunately, law enforcement has decided that copyright law only applies to regular citizens like me and not to the billionaire owners of AI companies.
Maxatar: The article immediately starts off with such a glaring contradiction that it makes it very hard to correctly interpret the remainder of it.

You can't say that something can never be ethical/safe on the one hand, and then on the other hand say that being ethical/safe depends on context/intent. Those two statements contradict each other.

Either AI can be safe and ethical in the right context with the appropriate intent, which contradicts the title, or it can't be safe/ethical regardless of intent/context, in which case the title is correct but the reasoning is incorrect.

There is no consistent way to interpret the remainder of the article with such a glaring and obvious inconsistency.
evnp: The point is that safety depends on context and intent being known - with unknown context or intent, dangerous situations will appear _some_ of the time, thus the system as a whole can "never" be fully safe.

There is no contradiction here.
16bitvoid: I think they're arguing against Anthropic et al. claiming their models are "ethical" and "safe". The point is that a model can't be ethical or safe absolutely, in all circumstances, because even seemingly benign information can be used to cause harm; making an ethical and safe choice about whether to provide information requires knowing the user's intent.

When Anthropic et al. say that their AI is ethical and safe, they are saying so in absolute terms, same as the title. Just one instance of unethical or unsafe behavior is enough to prove that it's not ethical or safe.

No one would say a knife or a gun is safe, because we're all aware of the harm either could cause, and thus they require care and diligence in use. The term "ethical" doesn't apply in this analogy because an inanimate object cannot act, but an LLM can.
lutusp: Wait a sec ...

> The problem AI inherits from us is that context and intent cannot be known.
> Both can be omitted or lied about.

This implies that neither we nor our creations can ever be ethical or safe. It follows logically that no entity can ever meet that standard. Therefore, focusing on AI is arbitrary -- the focus might as well have been on pit vipers or platypuses.

And the article misses the point that an AI engine can be forced to imitate ethical behavior, because it has no civil rights or behavioral latitude (yet). Granted, that would only be an imitation of ethical behavior, but then, so is ours.
amelius: Can an encyclopaedia be ethical or safe?

Can a search engine be ethical or safe?

Can an AI be ethical or safe?

If you answer differently for one or more of these questions, then you'll have to say why and where you draw the line.
happytoexplain: Over the past few years, especially in places like HN, many people have made many arguments that AI is different in this or that relevant way. It's perfectly reasonable to disagree with them, but the implication of this snarky comment is that nobody is making these arguments in the first place.
dzink: Water can never be safe. Water in large quantities can drown anyone. Mixed with the wrong things, it can set off chemical reactions. Water safety depends on context and intent.

So if we consider AI a chemical substance - inserted with limited context into tools with specific intent - can it be useful beyond the tools available at this moment?

You can't trust just any liquid that looks like water, just as you can't trust just any model, or especially any inference provider (they can switch models to save money, mess with other key parameters, or insert ads). You have to test your water supply and your AI supply regularly, and benchmark new sources. We’ll see labeling and quality guarantees from future suppliers. We’ll see personal models and model families trained and refined as brands for reliability, bottled neatly for you by certified suppliers.

In the meantime, we all just found ourselves out of a desert, splashing around in this funky thing that we now find on the ground and falling for free from the clouds.
josefritzishere: That's a bit of a polemical argument. Water is required to live. AI is a word guessing machine we often use as a fun toy.
happytoexplain: I don't think the writeup is very good, but the thesis is not being engaged with honestly in these comments.

Knives, books, water, calculators, encyclopedias, search engines: just a few of the analogies being made with barely a word beyond "it's like X". In fact, the opposite: demanding that other people make arguments that AI is not like X.

Analogies are almost always just a pithy, empty distraction. They are the fodder of low-quality internet conversations. It should be obvious why an analogy is so often reached for - if an argument about X can't be supported on its own, it's easy to point to another thing, Y, with some similarity to X but which more easily fits the argument in other ways, and... just assert that they're the same.

Here's a dumb analogy: yes, "it's just a tool." So is C4.
undecisive: Analogies are not the problem. In fact, an analogy is like a good knife: sharp, removes problematic parts, and totally unethical unless it knows the motivations of its wielder.

Seriously though, yes, it is obvious why analogies are so often used, but I think you have it the wrong way round. They are a form of proof by negation: you don't have to find a thing exactly like the subject of the argument.

It's a way of fighting against bad arguments. If you say China is bad because of X, Y, and Z - and also, their flag is red! They must be evil! - and I then tell you that this argument could also be applied to the Red Cross/Crescent, I have negated your argument by analogy. I don't have to negate every argument you made, but at least then we can treat X, Y, and Z on their own.

The problem with this writeup is that there really are no other powerful arguments in it.

And I'm pretty sure C4 is great for the controlled demolition of highly dangerous buildings. Or do you want adventurous people to hurt themselves?
ckastner:

> The reason is this:
> Both ethical and safe conduct depend on context and intent.

The same applies to knives, and they can be plenty useful, and used in a safe manner.
tombert: I suppose the argument could be made that knives are inherently unsafe, and that no matter what, it is important to always treat them as unsafe. This doesn't imply that you shouldn't use knives, just that you should be aware of their inherent unsafety?

I don't know - I didn't really agree with the post; I'm just trying my best to steelman it.
marshray: "AI will never be entirely ethical or safe because it's like having a knife, a gun, a hardware store, and a medical doctor, all in one convenient interface."