

This kind of ML/AI work has been going on for a while now. Text recognition, image recognition, pattern detection, predictive analysis and many more types of work actually benefit a ton from this recent LLM fad.
The biggest difference in the output between more "normal" AI systems and LLMs is that LLMs sound just as confident when they're wrong. With something more traditional, say facial recognition, you can see immediately when the system decides a hand-drawn smiley face is an actual human face, because the mistake is right there next to its score.
AI/ML systems have always been error prone in one way or another. The biggest issue I see is how results from these systems are presented.
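To make the presentation point concrete: a traditional classifier typically spits out a score you can inspect and threshold, whereas an LLM just hands you fluent prose. Here's a toy sketch of the classifier side, with made-up labels and logits standing in for a face detector's output:

```python
import math

def softmax(logits):
    """Convert raw classifier scores into probabilities."""
    exps = [math.exp(x - max(logits)) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical raw scores from a face detector shown two images.
# The numbers are invented for illustration only.
labels = ["human face", "not a face"]
logits_for = {
    "photo of a person": [4.0, 0.5],
    "hand-drawn smiley": [1.2, 1.0],  # nearly a coin flip
}

for name, logits in logits_for.items():
    probs = softmax(logits)
    best = max(range(len(labels)), key=lambda i: probs[i])
    # Showing the score next to the label makes weak guesses obvious.
    confident = probs[best] >= 0.9
    print(f"{name}: {labels[best]} ({probs[best]:.0%}, confident={confident})")
```

The smiley gets flagged as a low-confidence call, so a human (or downstream code) knows to distrust it. An LLM's text output carries no such number, which is exactly why its wrong answers read as confidently as its right ones.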
Shit captions are what ticking tocks is all about, right?