I keep seeing a huge red flag when speaking to my network and friends at big tech companies.
It’s either complaints about being “forced to use AI and be more productive,” or bewilderment that “GitHub Copilot is so good.”
I accept that working in a greenfield startup is different from an enterprise trying to change.
Still, it’s striking that they’re only now adopting tools I was using before ChatGPT launched.
They don’t know what’s coming…
This helped me define what “Safe AI” means to me.
We must, not merely should, integrate superintelligence into our society and leverage its capabilities to the fullest. At the same time, we cannot become overly reliant on it. Cognitive decline is not acceptable.
This is a very delicate balance to strike, but we’ll figure it out.
Education, at every level, is where that balance will have to start.
P.S. I might turn this into a full essay one day. Follow my RSS feed or Substack if you want to catch it.