Discussion
The Future of Everything is Lies, I Guess: New Jobs
elcapitan: Is there a way back to calling human beings human beings and not "meat"? Or is the sociopathic Jeffrey Dahmer undertone now the new normal?
sp527: Sure. Would you like WWII, medieval-era Christianity, or Khanate Asia?
sebg: humans are made of meat -> https://news.ycombinator.com/item?id=47688678
pmg102: https://archive.is/OjGox
gordonhart: Why post an archive link for a static site with no ads or subscriptionware?
sebg: https://news.ycombinator.com/item?id=47779352
dsmurrell: "Unavailable Due to the UK Online Safety Act" - without my VPN... do you know why?
nonameiguess: This is part 9 of a 10-part series. The author has posted every chapter to Hacker News every day for the past 9 days. Every time, four of the first five or so comments are:
1. Someone noting it is unavailable in the UK.
2. Someone posting an archive.is link.
3. Someone asking why the above posted an archive link to a static site.
4. An answer that it is because the content is otherwise unavailable in the UK.
Do we really need to see this every single time? I realize I am also not adding to the real discussion now as well, but Jesus Christ, this is irritating. Can we get a new rule that an author posting their own content, knowing it is unavailable in the UK, has to post their own archive link and explain why they're doing so as part of the submission?
kreco: I wish we could flag some posts (like as "tangential") instead of this archaic upvote/downvote. And obviously a way to filter in/out those flags.
nemomarx: Geo-blocking the UK satisfies any age-verification requirement; otherwise the site owner would have to check whether their content is considered adult in the UK and implement something.
nmeofthestate: "otherwise the site owner would have to check if their content is considered adult in the UK and implement something"
IMO a small blog website is not going to get pulled up for this - it's about the author making a point. They're entitled to do so, of course.
gordonhart: I see, thanks. Strange times for internet censorship in the west.
sebg: Agreed. If I remember correctly from the other sections of this post, I think it's a self-directed ban.
ai_critic: I think that this is an interesting attempt at taxonomy, but it's a bit on the magical-thinking end (and I say this as somebody who does a good amount of what's described as the incanter role). It's a combination of the author's previous witchy aesthetic (see his excellent "<x>ing the technical interview" series) and progressive labor politics (which are asymptotically doomed in the current automation push). The biggest failure of imagination, I think, is the assumption that we'd use humans for most (or *any*) of these jobs--for example, the work of the haruspex is better left to an LLM that can process the myriad of internal states (this is the mechanistic interpretability field).
mitthrowaway2: Yes, I had the same impression. The "process engineers" would themselves quickly be replaced by an automated system. The "statistical engineers", I think, would never be able to keep up with the rate of change of the AI models, which would likely have different statistical behavior and biases in each language/context/etc with each update, and so it's unlikely anyone would pay them to develop that required deep expertise in the first place. More likely, that work would be done at an AI foundation model company -- but it would be done just once, and then incorporated into the training process.
Aperocky: As an engineer, I've never been more excited about this job. My implementation speed and bug-fixing of my typed code used to be the bottleneck - now I just think about an implementation and then it exists. As long as I thought about the structure/input/output/testability and logic flow correctly, and made sure I included all that information, it just works, nicely, with tests. The Unix philosophy works well with LLMs too - you can have software that does one thing and only one thing well, that fits in its context and does not lead to haphazard behavior. Now my day essentially revolves around delivering (and improving how I deliver) concentrated engineering thinking, which in my opinion is the purest part of the engineering profession itself. I like it quite a lot.
rootusrootus: > My implementation speed and bug-fixing of my typed code used to be the bottleneck
I remember those days fondly and often wish I could return to them. These days it's not uncommon to go a couple of days without writing a meaningful amount of code. The cost of becoming too senior, I suppose.
simonw: Anecdotally I've been observing a significant uptick in the amount of code being produced by my peers who are in senior engineer, leadership and engineering management positions.They can take their 20+ years of experience and use it to build working systems in the gaps between meetings now. Previously they would have to carve out at least half a day of uninterrupted time to get something meaningful done.
ej88: I am personally of the opinion that ML will end up being 'normal technology', albeit incredibly transformative. I think you can combine 'Incanters' and 'Process Engineers' into one - 'Users'. Jobs that encompass a role that requires accountability will be directing, providing context, and verifying the output of agents, much like how millions of workers know basic computer skills and Microsoft Office. In my opinion, how at-risk a job is in the LLM era comes down to:
1. How easy is it to construct RL loops to hill-climb on performance?
2. How easy is it to construct an LLM harness to perform the tasks?
3. How much of the job is a structured set of tasks vs. taking accountability? What's the consequence of a mistake? How much of it comes down to human relationships?
Hence why I've been quite bullish on software engineering (but not coding). You can easily set up 1) and 2) on contrived or sandboxed coding tasks, but then 3) expands and dominates the rest of the role. On Model Trainers -- I'm not so convinced that RLHF puts the professional experts out of work, for a few reasons. Firstly, nearly all human-data companies produce data that is somewhat contrived, by definition of having people grade outputs on a contracting platform; plus there's a seemingly unlimited bound on how much data we can harvest in the world. Secondly, as I mentioned before, the bottleneck is both accountability and the ability for the model to find fresh context without error.
netcan: In some sense, technology is "not normal" regardless. If we think of the digitization tech revolution... the changes it made to the economy are hard to describe well, even now. In the early days, it was going to turn banks from billion-dollar businesses into million-dollar ones. Universities would be able to eliminate most of their admin. Accounting and finance would be trivialized. Etc. Earlier tech revolutions were unpredictable too... but at least retrospectively they made sense. It's not that clear what the core activities of our economy even are. It's clear at the micro level, but as you zoom out it gets blurry. Why is accountability needed? It's clearly needed in its context... but it's hard to understand how it aggregates.
bobthepanda: Accountability is really a way to address liability. So long as people can sue and companies can pay out, or individuals can go to jail, there is always going to be a question of liability; and historically the courts have not looked kindly at those who throw their hands up in the air and say “I was just following orders from a human/entity”
pixl97: >nd historically the courts have not lookedThis is dependent on having a court system uncaptured by corruption. We're already seeing that large corporations in the "too big to fail" categories fall outside of government control. And in countries with bribing/lobbying legalized or ignored they have the funds to capture the courts.
hombre_fatal: I mostly agree with you. Though something I half-miss is using my own software as I build it to get a visceral feel for the abstractions so far. I've found that testability is a good enough proxy for "nice to use", since I think "nice to use" tends to mean that a subsystem is decoupled enough to cover unexpected usage patterns, and that's an incidental side effect of testability. One concern I have is that it's getting harder to demonstrate ability. E.g. GitHub profiles were a good signal, though one that nobody cared about unless the hiring person was an engineer who could evaluate it. But now that signal is even more rubbish. Even readmes and blog posts are becoming worse signals since they don't necessarily showcase your own communication skills anymore, nor how you think about problems.
the_af: > But now that signal is even more rubbish. Even readmes and blog posts are becoming worse signals since they don't necessarily showcase your own communication skills anymore nor how you think about problems.
Yup. I've spotted former coworkers who I know for a fact can barely write in their native language, let alone in English, working for AWS and writing English-language technical blog posts in full AI-ese. Full of the usual "it's not X, it's Y", full of AI slop. Most of the text is filler, with a few tidbits of real content here and there. I don't know how it was before, but now blog posts have become more noise than signal.
abstracthinking: Humans will be held accountable, not machines, whatever technology is used. The jobs you suggest are based on the state of LLMs right now, and this could change rapidly, considering the pace of progress. These are just activities that are already done by people who work with these instruments, because they want to optimize and obtain the best/safest output from these machines.
the_af: > Humans will be held accountable, not machines, whatever is the technology usedIsn't this addressed explicitly in TFA, in section "meat shields"?As for the rest, if you predict even the jobs described in TFA will be obsoleted by future LLMs+tools, then the future is even more dire than predicted by Aphyr, right? Fewer jobs for humans to do.