Imagine you are on a train platform. Out in the open, you can see very far away. If your eyes were to follow the tracks, they would reach the horizon line, where you would see nothing. A second later, you would see a faraway dot. The blink of an eye, and a blaring train rushes through the station. A moment after, you are congratulating yourself for not standing closer to the tracks, only to realize that you were hoping to board that train… Yet it did not stop, and it is already a blip on the horizon.
We believe we will always be able to catch up on progress when it becomes relevant. Technology, however, seems to be getting faster and faster. The metaphor asks a crucial question: can we afford to wait for visible effects before trying to catch up? Or can we only hope to catch up?
What is “catching up” here? As an individual, it is probably keeping our relevance. As a society, it likely means planning for some individuals to lose their relevance, or for a massive and unforeseen technological edge tilting toward a single direction. It could also mean planning for unwanted effects and risks.
If you are wondering how well this metaphor describes our era, I invite you to read “I lost everything that made me love my job through Midjourney over night”.
Now, if you had asked me ten years ago which jobs I thought were safe from automation… I know you have not, but bear with me for a moment.
Art and art-related occupations felt pretty safe from automation, if not the safest. A lot of manual jobs are still pretty much untouched, but the mechanization of manual labor has been around for more than a century, and the next step feels very much within reach. A lot of desk jobs already seemed to be living on borrowed time. I am thinking of the army of entry-level workers in accounting typing in invoices, and of their legions of peers in legal firms relentlessly copy-pasting from one template to another.
If you wanted to automate something, you needed to be able to measure your performance at it. And measuring “how good is a piece of art” is subjective. Even if some artists and critics are quite good at articulating their opinions, those opinions remain hard to back with measurable metrics, and hence very hard to optimize against.
I used the past tense: “needed to be able to measure performance”. It turns out that being able to model a statistical distribution is enough to achieve impressive results.
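To make that distinction concrete, here is a minimal sketch (the corpus and the features are entirely hypothetical): instead of optimizing an explicit “how good is it” score, a generative approach only fits a model of what existing work looks like, then samples from it.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Hypothetical stand-in for a corpus of existing artworks,
# reduced to 2D numeric feature vectors for the sake of the sketch.
rng = np.random.default_rng(42)
corpus = rng.normal(loc=[0.2, 0.8], scale=0.1, size=(500, 2))

# Fit a density model to the corpus. Note that there is no
# "quality" metric anywhere: only a statistical model of what
# existing work looks like.
model = GaussianMixture(n_components=3, random_state=0).fit(corpus)

# "Generate" new pieces by sampling from the learned distribution.
new_pieces, _ = model.sample(10)
print(new_pieces)
```

Real generative art models replace this toy Gaussian mixture with far richer distributions (diffusion models, transformers), but the principle is the same: no subjective metric gets optimized; a distribution gets modeled and sampled.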
But forget about me: there is a long list of people highly relevant to their fields who were very wrong in their predictions. Even if a fringe subset had been right, we would not have been able to tell them apart, and would thus have been unable to use their wisdom.
Because of that, the train will very likely not stop for us. As Nick Bostrom stated:
The train doesn’t stop at Humanville Station. It’s likely, rather, to swoosh right by.
I took the example of generative art here because it demonstrated applicable capabilities months ago. But I have also watched myself switch from merely toying with ChatGPT to actually using it many times per day after the release of version 4.0. The impressive yet marginal increase in quality broke the threshold at which it became an obvious and adequate tool for many tasks.
I am still aware of the imprecision and hallucination issues when using it. It is not impossible that, because of the nature of the underlying models, these will never be fully fixed. But with a different threshold for everyone, we have to admit: good enough is fine. #chabuduo for the win? Every human is also imprecise and makes things up sometimes, and most of our activities do not require clockwork precision all the time.
Enhancements to large language models and their derivatives will likely follow the law of diminishing returns; however, they have not really displayed any slowdown so far, especially in terms of perceived value.
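For a concrete picture of what “diminishing returns” would mean here, a hypothetical sketch: empirical scaling-law studies typically report model loss falling as a power law of the resources invested, so each doubling of compute buys a smaller absolute gain, even though the curve never fully flattens.

```latex
% Illustrative power-law shape only; a and b are fitted constants.
% L = model loss (lower is better), C = training compute.
L(C) = a \cdot C^{-b}, \qquad a, b > 0
```

Note that perceived value does not have to follow this curve: as the ChatGPT example above suggests, a marginal drop in loss can still cross a usefulness threshold.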
It is now hard to draw a line in the sand marking the performance they will not reach.