GPT-3
Text generation has far more applications than people think, e.g., generating flag design ideas from a text description.
The scaling hypothesis suggests that networks get better as we increase data and compute. If this is true, then in a few years we'll have *way* more powerful nets that will not be "GPT-3 but better" but qualitatively different, just as GPT-3 is qualitatively different from GPT-2.
Meta-learning means that after seeing enough patterns, a network picks up the ultimate pattern: how to learn new things. It becomes capable of learning tasks it wasn't initially trained for. For example, I can "program" GPT-3 to write blog posts in my style with just a few prompts.
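A minimal sketch of what that "programming" looks like in practice. The function name and the example titles below are made up for illustration; the point is that the whole program is just text — a task description plus a few worked examples — handed to a GPT-3-style completion endpoint:

```python
def build_few_shot_prompt(task, examples, new_input):
    """Assemble a few-shot prompt: a task description, a handful of
    worked input/output examples, then the new input the model
    should complete."""
    parts = [task]
    for inp, out in examples:
        parts.append(f"Input: {inp}\nOutput: {out}")
    # Leave the final Output: blank -- the model's completion is the answer.
    parts.append(f"Input: {new_input}\nOutput:")
    return "\n\n".join(parts)

prompt = build_few_shot_prompt(
    task="Write a blog-post title in my style.",
    examples=[
        ("scaling hypothesis", "Bigger Nets, Stranger Minds"),
        ("end-user programming", "You Know Your Problem Best"),
    ],
    new_input="human-AI collaboration",
)
print(prompt)
```

No gradient updates, no fine-tuning: swapping the examples swaps the "program."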
People don't want to pay the same money for work created by a robot that they pay for human work; this is interesting because it implies we'll see some sort of human-AI collaborations in the future to justify the price.
Hardware demand will rise dramatically in a few years, because now everyone knows it's possible to run a mile under 4 minutes.
What made GPT-3 possible is not data or compute - these already existed, and Google had 10x more of both - but the courage of the founders' convictions that it could be "just like that": a ton of data and a ton of compute.
The 4-minute-mile principle: before the first person ran a mile under 4 minutes, it was considered impossible, and nobody seriously tried. But once the record was broken, hundreds of people did it within the next decade.
The atomic bomb's secret was simply that it was possible: the Americans knew it could be built - their own mile under 4 minutes - and so they built it.
GPT-3 creates even more asymmetry in creative work: low-level tasks such as "implementing features," "writing markup," or "editing text" will be automated away, leaving only the pure creation.
GPT-3 will change writing; e.g., a team of five skillful writers armed with GPT-like technology will outperform a large corporation.
In the future, creative jobs will belong to "best experts + AI" pairings; this is already happening in chess and design.
Personality, tone, etc. can all be "programmed" into GPT-3 with simple textual prompts, as if you were describing a task to a human assistant.
"Email from bullet points": we're likely to end up in a weird place where both parties communicate by transforming simple bullet points ("busy, next week") into "Dear John, I'm terribly sorry to disappoint you, but it seems unlikely I can attend this week's workshop due to an extreme workload...". Neither of them would really think in those terms; it'd just be an abstraction we conform to.
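The loop can be sketched as two prompt templates, one on each side of the conversation. The template names and wording here are hypothetical; only the prompt strings are built — the actual model calls are omitted:

```python
# The sender's model expands terse notes into polite prose; the
# receiver's model compresses it back. Neither human ever reads
# the long form -- it's just the wire format.

EXPAND_TEMPLATE = (
    "Expand these bullet points into a polite email:\n{points}\n\nEmail:"
)
SUMMARIZE_TEMPLATE = (
    "Summarize this email as terse bullet points:\n{email}\n\nBullets:"
)

def expansion_prompt(points):
    """Build the sender-side prompt from a list of terse notes."""
    return EXPAND_TEMPLATE.format(points="\n".join(f"- {p}" for p in points))

def summary_prompt(email):
    """Build the receiver-side prompt from the generated email text."""
    return SUMMARIZE_TEMPLATE.format(email=email)

sender_prompt = expansion_prompt(["busy", "next week"])
print(sender_prompt)
```

The "Dear John..." text exists only between the two templates; the humans exchange bullets.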
Software
80% of B2B software is built for internal use.
The thesis of end-user programming is that you know your problem best, and transferring that domain knowledge takes a ton of time and energy.
In 50 years, it will seem nuts to people that in 2020 we had this elite caste of people who knew how to command computers to do what they wanted by typing special green symbols into a black terminal window.
There are two types of programming: a) deciding what the program should do, and b) actually implementing it. The first is far more valuable and far rarer.
Misc
People who complain about Google spying on them should first ask themselves whether they're willing to give up all Google services for "complete" privacy - if such a thing is even possible.