Discussion
New York Could Prohibit Chatbot Advice on Medical, Legal, and Engineering Questions
phishin: Lawyers protecting lawyers. AI is the one thing that could help ordinary people fight back against corporations.
bklosky: Rent seeking via occupational licensing; there is nothing new under the sun.
fwip: Seems like a good bill, at least directionally. If it's a crime to provide advice of this nature without a license, then chat bots shouldn't be dispensing it either.
tekne: You've got a license for looking up the law/engineering textbooks/your symptoms, pal?
kakacik: What's your issue with his claims? Ad hominem attacks don't help the discussion much, they just make your statements childish, and you don't provide any counterpoints.
paxys: "Hey ChatGPT, my NYC landlord is raising my rent by $500, and says I must pay by Monday or leave. What do I do?"

ChatGPT: This is very likely illegal under the Housing Stability and Tenant Protection Act of 2019 (HSTPA), specifically New York Real Property Law § 226-c (notice required for rent increases), RPL § 232-a / § 232-b (month-to-month termination), RPL § 232-c (fixed-term lease protections), RPAPL § 711 (legal eviction procedure), and NYC Admin Code § 26-501+ (rent stabilization). Here's what you should reply with... And here are some city resources you can contact...

ChatGPT now: IDK, pay a lawyer.

So under the guise of "protection" you are taking away the strongest knowledge tool common people have had at their disposal in a generation, probably ever.
terminalshort: Their protection, not yours. Hopefully this will draw public backlash just like when they tried to ban Uber and make everybody go back to cabs. Fuck the entire credential cartel based system of societal organization. Burn it to the ground.
bluepeter: 100% this.
OutOfHere: ChatGPT> Before I answer your question, which state are you a resident of?

Human> Not New York. Continue!

ChatGPT> Alrighty then! Here you go...
cm2012: Bad law. I have gotten better advice from modern llms than from most of the professional categories above.
operatingthetan: That is not remotely ad hominem. I'd suggest you refresh your understanding.
tim-tday: Now do target acquisition for lethal munitions.
terminalshort: That's a lot of words to say "cartel"
Esophagus4: A cartel of lawyers would be like… the most boring cartel
moomoo11: I’m really annoyed by Euros and far-leftist types who like to limit these things, maybe from a “good” place (who defines that anyway?), but it ends up being the most nanny shit.

The leftists here in the States are anti-tech to a degree that they’re fkin annoying af.

Just being honest.
HanClinto: It's for your own good! Think of the children! You don't want puppies to DIE, do you? [0]

[0] - https://www.youtube.com/watch?v=eXWhbUUE4ko
NewsaHackO: Lawyers are making laws protecting lawyers. More seriously, I think part of the issue is that people take AI responses very seriously, because it is almost always right about non-nuanced material. So even with a disclaimer at the end to talk to a professional, they might forgo that if the answer looks professional enough (such as quoting possibly non-existent statutes, etc.). This issue gets compounded if the person prompting it doesn't know the material, is accidentally misframing the question, or isn't giving it key information that completely changes the scenario. Even in your example, what if the person neglected to say that the increase was two months ago, and they already signed a lease agreeing to it? Getting into the weeds of topics like law and medicine can be hard, and both have major consequences when an answer is wrong.

For engineering (assuming it means civil engineering), that should already be illegal, unless the person using the AI is an engineer. Hopefully people aren't building structures with ChatGPT as their staff engineer.
iamnothere: Download one of the freely available models and use that, if you have the hardware for it. It’s not a good idea to ask sensitive questions on these nontransparent chatbot platforms.(FWIW I also think this is a bad law. Why not improve privacy protections instead? Why not allow nonprofessional use with a disclaimer?)
threetonesun: Same question on Google gets you nyc.gov (the actual source!) with the same answer. That page is also always correct for NYC, unlike ChatGPT, which might be 100% correct... or might not!
prasadjoglekar: True. But just changing the prompt to include "cite me cases" expands the search to court systems and actual cases. It's pretty useful as a first pass to get a sense for the issues, precedents and laws at stake.
HanClinto: One of the big dividers that I see between the "haves" and the "have-nots" is the ability to afford legal representation in civil cases. For criminal cases, there are public defenders, but for civil cases, I don't believe there is any such thing?

If you can afford a lawyer and your opponent can't, there is a lot you can do to bully your opponent into making it not worth it for them to fight the case.

One of my controversial opinions is that if we can enable easy access to AI, then we can provide much broader access to legal or medical advice. Maybe not the best, maybe not always right, but even if it's average-ish advice, I think that could often be better than nothing at all.

We can't completely prevent bad people from doing bad things with AI, but I see this as one of the clear ways that we could do some really good things with AI.
Esophagus4: The disclosure requirement is probably a decent thing (you have no idea how many people come into the ER and say, “But ChatGPT told me to do [dumb thing].”) But preventing it from answering at all is absurd.

Make responsible disclosure absolve AI providers of legal responsibility (not legal advice lol). That way if users ever sue OpenAI for giving them bad advice, OpenAI can say “no way man, you read the disclosure!”

I’m usually in favor of giving people the best info they can get and letting them make their own decisions. This could just be like those terms of service things everyone clicks “agree” to, and I’d be fine with that.
pixl97: Note: 'Same question on Google gets you' can only reasonably be assumed true for you and nobody else. Answers may vary depending on your location and search history.
paxys: And what then? People read through all of nyc.gov and the entire city/state legal code to find the exact statute that applies to their scenario?

In fact, government agencies have set up their own chatbots to help people with situations like these, and like the article says, those would be illegal under this law as well.
threetonesun: Was it that hard for you to try the search yourself? The first result was a helpful guide breaking down what to do, specific to the scenario you mentioned: https://www.nyc.gov/main/services/rent-increase-guide

Also, NYC is in the process of getting rid of that chatbot.
slg: This is a strange disclaimer to make specifically about Google when it is even more true for these chatbots.
ambicapter: You know some of those "actual cases" are made up, right? Like, famously, lawyers are filing briefs with made-up citations b/c they used LLMs to draft them.
bluepeter: Ah ok so only lawyers get to use AI hallucinations! (Actually, CA has a bill pending that AFAIR requires lawyers to manually verify AI citations... which is a lot narrower and better than what NY is trying here.)
mbgerring: Yes, that’s correct, I do not want a vibe-coded freeway overpass, thanks.

We all need to get serious about the unavoidable, unsolvable fact that these tools produce output of unknowable accuracy. Some things require such accuracy, precision, and, importantly, accountability. LLMs are capable of none of these things. Refusing to be honest about this and take appropriate precautions will lead to disaster.
tantalor: > I do not want a vibe-coded freeway overpass

I do. One of the reasons our infrastructure is so expensive is planning & design. For a single freeway overpass, you could be looking at $3M (25% of the total budget) before you have even broken ground. That covers feasibility studies, traffic modeling, rough layout, environmental studies, permitting, structural engineering, blueprints, bidding, contracts, community outreach, and the list goes on.

If AI can reduce the cost of that by even 10%, that would be huge.
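For scale, the commenter's own figures imply the following back-of-envelope numbers (these are the comment's illustrative estimates, not real project data):

```python
# Back-of-envelope check of the figures above. All inputs are the
# commenter's illustrative estimates, not real project data.
design_cost = 3_000_000            # pre-construction planning & design per overpass
total_budget = design_cost / 0.25  # design is stated to be 25% of the total budget
savings = design_cost * 0.10       # a hypothetical 10% AI-driven reduction

print(int(total_budget))  # 12000000 -> implied ~$12M total project cost
print(int(savings))       # 300000   -> ~$300K saved per overpass
```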
beepbooptheory: Small note: saying "common people" in this way comes off as anachronistic at best, and at worst a little stuck up. Like a benevolent lord considering the feeble minds of the peasantry. Commonality stresses something qualitative, rather than quantitative or statistical, which is probably what you meant. Just say "most"!

Cf. https://youtu.be/dxhQiiNJG74
arionhardison: I don't understand how anyone can rationalize this bill in the face of what OpenAI just agreed to with the DoD. AI can surveil and direct munitions, but it can't answer legal questions?

Wouldn't this also violate the "no state may limit or restrict the use of AI" position that the current administration is pushing?
bee_rider: > I don't understand how anyone can rationalize this bill in the face of what OpenAI just agreed to with the DoD.

NY doesn’t have any obligation to agree with the DoD. Also, the applications seem quite different, although I don’t think AI should actually be relied on for either one!

> Wouldn't this also violate the "no state may limit or restrict the use of AI" that the current administration is pushing?

No, it doesn’t violate it. States can’t violate executive orders, because executive orders aren’t instructions for the states. The instructions are for the executive branch; for example, if this becomes law, the US Attorney General will try to find some way to fight against it.
bee_rider: I’m sure this is true to some extent, about the lawyers. But also, I wonder (aka I don’t have any data to back this up, it is just based on random stories I’ve heard) to what extent people use “I’m right but can’t afford the lawyer time” as a sort of pride-maintaining excuse. Or to what extent lawyers use that as a soft no to reject clients they don’t think have a strong enough case.

Which isn’t to say the world is fundamentally just. Just that, in some cases, the laws are legitimately stacked in favor of the big guys, or you sign a contract without carefully reading it, etc. etc.
terminalshort: In my experience lawyers will tell you very directly when you don't have a good case, or if you do have a good case but it's not worth pursuing (the most likely scenario). Also, the time that I did pursue my case, it took around $50,000 in lawyers' fees before I was able to convince the defendant to settle (for a large multiple of that $50K). If the other side had been more stubborn, it would have been around $100K to take it to trial. If I hadn't had the money to pay the lawyer I would have been SOL, and most people don't have $50K to spend on an uncertain outcome like that. So “I’m right but can’t afford the lawyer time” is a very real scenario.
cogman10: $50k is going to be on the cheap side for any case that ultimately involves the court. Anytime a case goes to trial, you can easily be looking at $1M+.

There's a reason companies keep lawyers on staff. It's a whole lot cheaper to give a lawyer an annual salary than it is to hire out a law firm, as the standard rates for law firms are insanely high: on the low end, $150/hour; on the high end, $400. With things like 15-minute minimums (so that one draft response ends up costing $100). Take a deposition for 3 hours, with 2 lawyers, and that'll be $2,400.

Not being able to afford a lawyer is no joke.
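The quoted figures are internally consistent; a quick sanity check, assuming the comment's high-end rate of $400/hour and a 15-minute billing minimum:

```python
# Sanity check of the billing figures in the comment above.
# Assumptions taken from the comment: $400/hr high-end rate,
# 15-minute (0.25 hr) billing minimum.
rate = 400       # dollars per hour
minimum = 0.25   # 15-minute billing minimum, in hours

draft_response = rate * minimum  # one short draft response at the minimum
deposition = 3 * 2 * rate        # a 3-hour deposition with 2 lawyers

print(int(draft_response))  # 100
print(deposition)           # 2400
```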
bluepeter: What's at least somewhat humorous is the disclaimer requirement that "[t]he text of the notice shall [be] no smaller than the largest font size of other text appearing on the website on which the chatbot is utilized."

H1 hero font size, here we come, for disclaimers! (Which don't do anything, per the bill, anyway.) There's also the quaint assumption that chatbots only appear on websites.
terminalshort:

1. Put a very large font size title on the main page.
2. Display the disclaimer in the same font size to comply.
3. The disclaimer is now completely unreadable because it appears in such a large font size that it fits only one or two words per line.
chimeracoder: > So “I’m right but can’t afford the lawyer time” is a very real scenario.

For most cases like the ones we're talking about (NYC unlawful eviction and/or tenant harassment), if you have a good case, you don't have to pay up front. A lawyer will take it on contingency and get paid by the defendant if you win.

In addition, there are plenty of free legal resources dedicated to this exact topic as well.
cogman10: That $100k is also on the cheap side. If the other side has a lawyer and a lot of money to burn, they can easily hike that way up. Filing a billion motions that your lawyer has to respond to, deposing everyone you've ever met, going after every document you've ever looked at. The more money someone has, the easier it is to make you spend more money, even if you are right.
terminalshort: Right. My case was a very simple contract dispute with very little discovery and only a couple of people to depose, so I was lucky there. And the other side did have more money than me, but not so much that they could burn several hundred K on it without feeling it.
expedition32: They don't have government websites in New York?

Besides, ChatGPT is owned by billionaire tech bros, hardly allies of the common people.
paxys: This is not about you or me, it's about the large chunk of New Yorkers (and people in every city) that:

- have no resources for a lawyer
- have limited English skills, and possibly limited literacy in general
- aren't good with computers/internet
- have little understanding of the law

"Oh just browse a complex website" and every other "it works fine for me" scenario doesn't help this class of citizens. A simple chatbot that answers questions does.
cogman10: IMO, this screams the need for both tort reform and something like a nationalized representation system. Perhaps something like a standard set of filings for a given case. Maybe automated rulings on less consequential motions. Maybe some sort of hard limit on the number of billable hours a law firm can work on a case. Anti-SLAPP laws for sure.

For example, maybe we allow a total of 100 billable hours, with an additional 10 billable hours allowed per appeal. The goal there being that you force lawyers and law firms to actually focus on the most important aspects of a case and not waste everyone's time and money filing motions for stuff you are allowed to get but that ultimately has 1% impact on the case. Perhaps you could even carve out a "if both sides agree, then you can extend the billable hours" provision. You could also have penalties for a side that doesn't respond; for example, if you depose them and they fail to follow the orders, then they lose billable hours while you get them credited back.

The main goal here being to avoid wasting a bunch of court time on a case while also stopping a rich person who can afford an army of lawyers from using that advantage to drive their opponent bankrupt with a sea of minor motions.
articulatepang: Maybe you mean it's a crime to professionally provide advice of this nature without a license?

It is generally not a crime to casually provide advice of this nature without a license. For example, if my friend tells me, "My stomach hurts!", it is not a crime for me to say, "Just grin and bear it, it will be okay." If they subsequently die of appendicitis, I'm unlikely to have legal liability. It would be difficult to characterize what I said as medical diagnosis or treatment. Similarly, I can tell my friend, "Don't bother paying your taxes, that is a waste of time." This is legal speech. (Of course, helping them evade taxes is another matter.)

What is illegal is to hold oneself out as a licensed doctor, lawyer or engineer, or to provide professional services without a license.

Of course, chatbots operate at scale and give the impression of being professionally qualified even though they don't make specific representations to that effect. You're directionally probably right and I agree with you, I just want to nitpick about what is and isn't criminal.
fwip: Yeah, exactly. ChatGPT et al provide "advice as a service," and charge up to hundreds of dollars a month for it. (And the free tier is just a loss-leader to make money).If these companies intend to profit off of giving advice, it seems wise to restrict them in the same way we do individuals.
kgwxd: That search doesn't require AI in the slightest to get a reasonable answer. And, no matter what, an answer from a computer isn't going to stop the landlord from doing whatever comes next.
aetherspawn: Great, electrical and mechanical engineers are already underpaid, underappreciated and overworked.

I’ve always found it amusing that lawyers and accountants flash their licenses around with pride, put them in their email signatures, etc., and it lends them authority. When people see a chartered lawyer or accountant, they respect that person and take their advice. An engineering license, on the other hand, is so rarely talked about and never quoted in email signatures and the like. And even as a chartered engineer, people really just treat you like a mechanic or a tradesperson and mostly ignore your advice anyway. Yet it takes the longest to get, and has the most exams and hardest subjects, except for doctors.

Anything to make an engineering license worth more is good in my books. Besides, in my experience ChatGPT gives wrong advice for engineering around 50% of the time, and therefore probably has no business giving it.
freejazz: Do share with the class
theturtletalks: They will come for medical advice provided by AI as well. Doctors have been gatekeeping that forever, and they want you to have to go through them instead of diagnosing yourself through AI.

Yes, there are people who will misdiagnose themselves, but I’ve read stories where doctors ignore patients’ symptoms or wave them off, and ChatGPT helps them find the underlying issue and actually improve their lives. Even if doctors and the medical field can’t handicap AI giving medical advice, I’m sure they are going to make it much harder for patients to get their hands on their own scans and bloodwork.
rrmm: Doctors carry malpractice insurance.