2024-01-23: Goalposts

In Artificial Intelligence and the Moving Of

🔷 Hey everyone, welcome to the first-ever edition of Emergent Behavior, a daily newsletter that translates what’s happening in the realm of artificial intelligence.

Subscribe to get breakdowns of the most important developments in AI in your inbox every morning.

Here’s today at a glance:

  • 💨 Meta joins the AGI chase

  • 🏔️ Sam Altman on AGI in Davos

  • 🤖 AGI has been achieved: Prompt engineering edition

  • 🗞️ Things Happen

  • 🖼️ AI Artwork Of The Day

💨 Meta joins the AGI chase

Zuck announced last week:

  • Meta has “long term goals of building general intelligence, open sourcing it responsibly, and making it available and useful to everyone in all of our daily lives”

  • They have 600,000 Nvidia H100 GPU equivalents of compute. To put this in perspective, OpenAI’s GPT-4 was reportedly trained on 8,000 H100 GPUs for 100 days; Meta now has 75 times that number, with roughly $10.5 billion of H100s alone (see the quick arithmetic check after this list).

  • They are training Llama 3, the successor to Meta’s popular open-source language model Llama 2, which will presumably match OpenAI’s GPT-4. This puts the open-source release of a GPT-4 equivalent on roughly a 3-month fuse, assuming a development trajectory similar to Llama 2’s.

  • They promise advances “in every area of AI. From reasoning to planning, to coding, to memory and other cognitive abilities”
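For those keeping score, the arithmetic behind those two figures checks out. A minimal sketch in Python, assuming the roughly 350,000 physical H100s Zuck cited alongside the 600,000 equivalents, and an assumed street price of about $30,000 per card (the price is our guess, not Meta’s):

```python
# Back-of-the-envelope check on Meta's announced compute.
h100_equivalents = 600_000   # Meta's announced total, in H100 equivalents
gpt4_training_gpus = 8_000   # reported H100 count for GPT-4's training run
physical_h100s = 350_000     # physical H100s cited within the 600k equivalents
unit_price_usd = 30_000      # assumed street price per card (our guess)

print(f"{h100_equivalents / gpt4_training_gpus:.0f}x GPT-4's training fleet")  # 75x
print(f"${physical_h100s * unit_price_usd / 1e9:.1f}B in H100s alone")         # $10.5B
```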

Now, the interesting thing, of course, is that Zuck’s AI chief, Yann LeCun, has been very negative about OpenAI’s approach to AGI, questioning both its feasibility and whether it’s even possible in a reasonable timeframe:

..autoregressive [models] like chatGPT .. simply cannot reason nor plan. ..They have a very superficial knowledge of the underlying reality.

- Yann LeCun, 01/13/2024

And in fact, Yann does not believe AGI will come from imitating higher human functions; rather, base animal cognition will have to be mastered first:

Too often, we think a task is easy because some animal can do it. But the reality is that the task is fiendishly complex and the animal is much smarter than we think.

Conversely, we think tasks like playing chess, calculating an integral, or producing grammatically correct text are complex because only some humans can do them after years of training. But it turns out these things aren't that complicated and computers can do them much better than us.

This is why the phrase "Artificial General Intelligence" to designate human-level intelligence makes absolutely no sense

- Yann LeCun, 01/13/2024

Zuck pushes back on this, however, saying:

I don’t have a one-sentence, pithy definition. You can quibble about if general intelligence is akin to human level intelligence, or is it like human-plus, or is it some far-future super intelligence. But to me, the important part is actually the breadth of it, which is that intelligence has all these different capabilities where you have to be able to reason and have intuition.

- Mark Zuckerberg, The Verge

The crux of this debate goes way back, and it was just a fond intellectual exercise until the talent started to move:

..The planning expert at OpenAI is Noam Brown, who worked on Libratus (poker) and Cicero (Diplomacy) at FAIR (Meta), both of which use planning. I suspect he has something to do with Q*. I don't think it's the kind of breakthrough the Twittersphere makes it to be.

People need to calm down.

- Yann LeCun, 11/23/2023

That then brought down the hammer from on high:

.. we need to build for general intelligence. I think that’s important to convey because a lot of the best researchers want to work on the more ambitious problems.

- Mark Zuckerberg, The Verge

Which is really a fascinating window into a meeting where the CEO tells the research chief, “You’re telling me we lost our planning guy to OpenAI, and he may end up inventing AGI? …I don’t care what you call it. The market calls it AGI, the market wants it, and that’s what we’re going to give them. Posthaste, s’il vous plaît.”

🏔️ Sam Altman on AGI in Davos

Far across the not-so-Narrow Sea, in the wintry castle of Klaus Schwab high upon the Alps, our prince who once was and is once more, in between selling enterprise software on the one hand and raising oil money on the other (why else do you think tech people go to Davos, duh), reveals to the world:

  1. “But I think AGI will get developed, in the reasonably close-ish future, and it’ll change the world much less than we all think. It’ll change jobs much less than we all think.” - YouTube

  2. “GPT-4 is best understood as a preview… progress here is not linear… what does it mean if GPT-5 is as much better than GPT-4 as 4 was to 3, and 6 is to 5” - YouTube

Those two statements, of course, are not contradictory at all, are they? Wait, did he say “6 IS to 5”? “IS”? As in present tense? As in a product that exists?

Anyway, most of the interview was spent normalizing AI as a tool, vs AI as Machine God. Well, we do hope it is a very useful tool (so do those enterprise software buyers).

🤖 AGI has been achieved: Prompt engineering edition

The team at codium.ai introduces what they call a “flow engineering” approach to computer code generation, which, in essence, uses up to 100 calls to GPT-4 per coding problem to:

  1. Reason about the goal, inputs, outputs and constraints of the problem

  2. Create tests, and reason about why each input leads to its output in each test

  3. Generate 2-3 potential solutions in English and rank them in terms of correctness, simplicity and robustness

  4. Iteratively pick a solution, generate code, and run the tests, repeating until all tests pass (see the sketch after this list)
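To make that loop concrete, here is a minimal sketch of the flow in Python. Everything in it, the ask_gpt4 helper, the prompts, the JSON shapes, is a hypothetical stand-in rather than Codium’s actual code; the point is the shape of the iteration:

```python
import json
import subprocess
import tempfile

def ask_gpt4(prompt: str) -> str:
    """Hypothetical stand-in for a GPT-4 call; wire in your own LLM client."""
    raise NotImplementedError

def run_tests(code: str, tests: list) -> bool:
    """Run candidate code against input/expected-output pairs."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code)
        path = f.name
    for t in tests:
        result = subprocess.run(["python", path], input=t["input"],
                                capture_output=True, text=True, timeout=10)
        if result.stdout.strip() != t["expected"].strip():
            return False
    return True

def flow_engineering(problem: str, max_repairs: int = 5):
    # 1. Reason about the goal, inputs, outputs and constraints.
    analysis = ask_gpt4(f"Describe the goal, inputs, outputs and constraints of:\n{problem}")
    # 2. Create tests, with reasoning for why each input yields its output.
    tests = json.loads(ask_gpt4(
        f'Write JSON tests [{{"input": ..., "expected": ..., "why": ...}}] for:\n{analysis}'))
    # 3. Generate a few candidate approaches in English, ranked by
    #    correctness, simplicity and robustness.
    plans = json.loads(ask_gpt4(
        f"Propose and rank 2-3 solution approaches as a JSON list for:\n{analysis}"))
    # 4. Pick a plan, generate code, run the tests; repeat until all pass.
    for plan in plans:
        code = ask_gpt4(f"Write Python implementing:\n{plan}\n\nProblem:\n{problem}")
        for _ in range(max_repairs):
            if run_tests(code, tests):
                return code
            code = ask_gpt4(f"Some tests failed; fix this code:\n{code}")
    return None  # no candidate passed every test
```

The repair budget per candidate is what pushes the call count toward that 100-calls-per-problem ceiling.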

This is exactly the test-driven development strategy taught to a generation of coders, and AlphaCodium achieved a 44% completion rate on the CodeContests evaluation, beating Google’s well-funded, fine-tuned AlphaCode2 at 43%. Both would outrank more than 85% of all human competitors.

To reiterate, with some heavily structured prompting, your machine can beat (most) humans at writing computer code by deploying the exact same strategies humans use to write good code.

And no, it’s still not AGI.

🗞️ Things Happen

  • Rabbit and Perplexity fall in ❤️ on X and sign a partnership deal for Perplexity to provide an answer engine on Rabbit’s R1 AI Companion.

  • Valve has opened the door to generative AI on the Steam store… finally, as it’s now clear that most of the low-level grunt work in the game industry will go the way of the dodo. One can only hope for more creativity as the economic hurdles to expression fall.

  • 3D motion capture with just cameras, no special suits, no markers, from Move AI

  • Open-source model fine-tuning expert Teknium1 finds a bug in Nous Research’s recent Mixtral-Hermes release, noting that it “can’t be reproduced on known working inference”. This remains one of the major problems of working in generative AI: reliability, even to the extent of reproducing bugs, can be much more difficult than creativity.

🖼️ AI Artwork Of The Day

Sunflower plant with flowers made of sunny side up eggs

Eggplant by u/ansmo (r/StableDiffusion)

That’s it for today! Become a subscriber for daily breakdowns of what’s happening in the AI world:
