W.r.t. vibe coding and LLM-assisted SWE, I think we’re going to need to wait a long time.
In my experience it takes at least a year, and sometimes even 3 years, from the time a software project starts being worked on in a real-life business setting until it fully develops, settles, and the “cracks begin to show”.
Initially everything is great. Biz is painting beautiful visions, devs are adding features, bugs are fixed and everything progresses relatively fast.
But sooner or later a successful project starts gaining users, features, use-cases, and workload, and only then do you get real feedback on how well (or not) it was actually designed and implemented. Stuff like: the architecture turned out way slower than expected, data losses show up, certain things can’t be implemented at all because of deficiencies in the data model. The more or less implicit “this happens once in a million, so it’s not an issue” cases begin to appear. Some original authors have left, and understanding the system is hard. The test suite got large, and now it’s way too slow. With actual workloads the cloud bill becomes (very) substantial. The “simpler approach” accumulated a bunch of hacks for things it didn’t handle, and is no longer simple at all.
My point being: only after a longer period do you get to see the lasting long-term consequences of your technical decisions.
With LLMs I expect it to be somewhat similar. What the efficient ways to work with an LLM are, how much to let it do things vs. hold its hand, what patterns of harnesses and methods to use. You can’t judge it too early.