Phases of the AI revolution

This has almost certainly been the most consequential year for AI in its 70-year history. We are now undergoing a full-blown AI revolution that is unfolding in phases.

Phase 1 is the large language model, which, for the first time, has allowed humans to talk to and question a staggeringly knowledgeable computer in ordinary human language. For anyone engaged in activities where accuracy is critical, such as many aspects of healthcare, the most vital improvement needed in LLMs is a reliable way of verifying that the information they give is correct. There is a great deal of work under way in this area.

The LLM phase will continue, and will probably accelerate; LLMs are already taking in visual and audio input, which will further enhance their remarkable power.

Phase 2 is now surfacing in the form of high-quality image, audio, and video apps that can create, enhance, and alter almost anything you can see or hear. For filmmakers it will allow an extraordinary leap in what small-budget productions can achieve.

Phase 3, which will increasingly include and combine the first two phases, will be the rise of the autonomous robot, able to do a vast range of tasks that previously only humans could do, and to talk to you. Robots will soon be able to perform tasks better, faster, and more tirelessly than a human. Test robots are already handling repetitive tasks in some settings; in some hospitals, for instance, they are helping with logistics, sanitation, and (apparently) patient care. Over time, in a vast array of workplaces, there is at least the possibility that they will become an adjunct to almost every worker, or replace them entirely.

Phase 4 is the autonomous vehicle revolution. This has taken much longer than expected; while it still faces challenges at the consumer-vehicle level (though considerable progress is being made), it is being introduced into large vehicles and vehicles with fixed routes, such as shuttles. A full-scale test of autonomous trucks is scheduled in Texas next year (Dallas to Houston), and in North Carolina two university campuses already have autonomous shuttles, including one integrated with the campus light-rail station. On many large work and education campuses you may soon see similar autonomous shuttles, as well as delivery vehicles. It is unlikely to be long before autonomous robots accompany delivery vehicles, and they could also help the infirm and disabled get on and off shuttles.

In the near future, it is possible that:

  • ChatGPT and similar chatbots/LLMs will be able to help teach almost anything, because they will have full video integration and be able to understand all kinds of student responses: text, audio, and video. They are almost there now, so this is a reasonably safe prediction.
  • More autonomous robots will be tested in safe, controlled settings.
  • You will be able to create simple, high-quality videos at will, including training and instructional videos.

Much of this would have seemed preposterous even a few years ago, but with the LLM breakthrough of a year ago everything else becomes easier, not least because of LLMs' steadily improving ability to write code.


Please share if you liked this article.