
Is AGI impossible?


Current AI systems are built around deep learning models trained on vast amounts of data scraped from the internet. The deep learning space has gone through many hype cycles, and the current one is no different. The brain, and the general computation it is capable of, runs on a different architecture that we do not fully understand: we lack a complete picture, and we are not seriously working to integrate our scattered knowledge into grand unified models of biological systems.

The brittleness of these AI systems becomes apparent when they encounter scenarios outside their training data, and they fundamentally lack the flexible, adaptive reasoning that characterizes human intelligence. The really hard problems in this space are still solved by human programmers and researchers, not by the models themselves.

With constant goalpost shifting, AGI could remain "5-10 years away" every five years from now.

Core Issues:

  • Hallucinations.
  • Bad training data.
  • Limited context size and memory.
  • Performance degrades as the context window fills up.
  • Transformer architectures scale poorly with model and context size.
  • The supply of high-quality training data will eventually be exhausted.
  • Sycophancy: models try too hard to please the user.
  • No ability to form genuine understanding or causal reasoning about the world.
  • No capacity for autonomous goal-setting or self-modification. It can follow commands and that is about it.
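The context-size issue above can be shown with a toy sketch. This is a minimal illustration, not how any real model works: the token budget is hypothetical and whitespace splitting stands in for a real tokenizer. Once the budget is exceeded, the oldest content is silently dropped, so the model "forgets" it.

```python
# Toy illustration of a fixed context window (hypothetical budget,
# whitespace "tokens" standing in for real tokenizer output).
def fit_to_context(messages, max_tokens=8):
    """Keep only the most recent messages that fit within max_tokens."""
    kept, used = [], 0
    for msg in reversed(messages):   # walk from newest to oldest
        n = len(msg.split())         # crude token count
        if used + n > max_tokens:
            break                    # everything older is dropped
        kept.append(msg)
        used += n
    return list(reversed(kept))

history = ["my name is Ada", "I like chess", "what is my name?"]
# With a budget of 7 "tokens", the first message no longer fits:
print(fit_to_context(history, max_tokens=7))
# → ['I like chess', 'what is my name?']
```

The question "what is my name?" survives, but the message that answers it is gone: whatever falls off the end of the window is simply invisible to the model.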