The video discusses the current state of AI, including milestones in predicting and reading images, but notes that robust machine reading and common sense reasoning are still far away, and highlights concerns about the need for independent review and limits on growth as AI continues to advance.
Large language models can be computationally universal when given access to unbounded external memory, as shown in a January paper, while Meta's new LLaMA model illustrates how improvements in model performance plateau.
A paper from January describes augmenting PaLM with an external read-write memory so that it can retain state and process arbitrarily long inputs, even simulating a universal Turing machine; this shows that large language models are already computationally universal if they have access to unbounded external memory.
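The idea above can be sketched in miniature: a fixed controller (standing in for the frozen language model) reads and writes an unbounded external memory, which together behave like a Turing machine. This is a hedged toy illustration, not the paper's actual prompt-based construction; the rule table and `run` loop here are hypothetical stand-ins for the model's finite control.

```python
# Toy sketch: a fixed finite control plus an unbounded external
# read-write memory (a dict used as a tape) is Turing-complete.
# The "rules" lookup table is a hypothetical stand-in for an LLM
# acting as the machine's finite control.

def run(rules, tape, state="start", head=0, max_steps=100):
    """Drive the finite control over an unbounded tape (dict)."""
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(head, "_")            # read external memory
        state, write, move = rules[(state, symbol)]
        tape[head] = write                       # write external memory
        head += 1 if move == "R" else -1
    return tape, state

# Trivial rule set: flip 1s to 0s moving right, halt on a blank.
rules = {
    ("start", "1"): ("start", "0", "R"),
    ("start", "_"): ("halt", "_", "R"),
}
final_tape, final_state = run(rules, {0: "1", 1: "1", 2: "1"})
```

The controller itself never grows; all unbounded state lives in the external memory, which is the paper's key point.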
Improvement in model performance levels off after a certain point, as shown in a paper on Meta's new LLaMA model.
The debate over AGI's capabilities is subjective; text-to-image generation is a new frontier led by Microsoft and Google, and rewarding models for good process is crucial.
The tasks left before AGI are the subject of a deeper and more subjective debate, where only obscure feats of logic, deeply subjective analyses of difficult texts, and niche areas of mathematics and science remain out of reach.
Text-to-image generation is the new story of the century, with companies like Microsoft and Google leading the way, but it is important to reward models for good process rather than just for outcomes.
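The distinction between rewarding process and rewarding outcomes can be shown with a minimal sketch. The step scores and answers below are illustrative placeholders, not drawn from any real reward model.

```python
# Hedged sketch contrasting outcome-based and process-based rewards.
# All values here are hypothetical illustrations.

def outcome_reward(final_answer, correct_answer):
    """Reward only the final result: 1 if correct, else 0."""
    return 1.0 if final_answer == correct_answer else 0.0

def process_reward(step_scores):
    """Reward each reasoning step; the mean credits sound process
    even when the final answer happens to be wrong."""
    return sum(step_scores) / len(step_scores)

# A solution with sound intermediate steps but a slip at the end:
steps = [1.0, 1.0, 1.0, 0.0]           # last step judged incorrect
print(outcome_reward("13", "12"))      # prints 0.0: outcome reward gives no credit
print(process_reward(steps))           # prints 0.75: process reward credits the work
```

Outcome-only reward gives zero signal to a mostly correct derivation, which is one argument for scoring the process instead.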