OpenAI Q* (Q-Star) Aims to Refine Gobble Selection from the Gen AI Gobbledygook

Well, the secret is apparently out, and OpenAI and Microsoft have played their hand in the Sam Altman reality show that has played out in Silicon Valley recently.

And so, this has attracted so much of the world's attention (including my own) that I've taken time off from my daily "old man gym rat" routine of lifting weights, cardio, and eating healthy just to write this public post! Never mind that we should eat our veggies before the protein and save the simple carbs for last to spare our pancreas! Never mind that over 30% of Americans have diabetes or are pre-diabetic. We have the Sam Altman / OpenAI doomsday-AGI saga to focus on! Yay! AGI will tell us how to eat to avoid metabolic syndrome, a true silent killer of us all, since most of us have no idea how to eat the right whole foods, in the correct order, and exercise properly!

As many of you may know or recall, I was at the top of the OpenAI Developer Forum leaderboard until I resigned in protest of how OpenAI mismanaged the developer forum. Basically, OpenAI did not like me being the "top dog" in the dev forum, so the only sane option was to move on. So I had a lot of inside, hands-on experience programming with the OpenAI GPT APIs until I departed around April of this year.

During that time, working hands-on with the core OpenAI generative AI tools, I learned firsthand that these LLMs generate gobbledygook. That's what they do best. ChatGPT (for example) is based on a very large LLM trained by OpenAI, and it performs as a current-generation text-autocompletion engine.
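
For readers who have never looked under the hood, here is a toy sketch of what "text autocompletion" means: score every token in the vocabulary, append the most likely one, repeat. The tiny vocabulary and the fake_next_token_logits function below are pure stand-ins for a real transformer forward pass; nothing here is actual OpenAI code.

```python
import numpy as np

# Toy vocabulary. A real LLM has tens of thousands of tokens.
VOCAB = ["the", "cat", "sat", "on", "mat", "."]

def fake_next_token_logits(context):
    """Stand-in for a transformer forward pass: one logit per vocab token."""
    rng = np.random.default_rng(len(context))  # deterministic toy "model"
    return rng.normal(size=len(VOCAB))

def autocomplete(prompt, max_new_tokens=5):
    """Autoregressive loop: repeatedly append the most probable next token."""
    tokens = list(prompt)
    for _ in range(max_new_tokens):
        logits = fake_next_token_logits(tokens)
        probs = np.exp(logits) / np.exp(logits).sum()  # softmax
        tokens.append(VOCAB[int(np.argmax(probs))])    # greedy pick
    return tokens

print(" ".join(autocomplete(["the", "cat"])))
```

That loop is the whole trick; all the apparent "intelligence" (and all the gobbledygook) comes from how good the probability estimates are.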

Of course, attempting to explain this to people who worship ChatGPT and other LLMs as a kind of god naturally falls on deaf ears. Trying to inform people who somehow believe that a well-designed gobbledygook generator is their savior is not an easy road to travel.

So, here we are. Generative AI generates so much gobbledygook that OpenAI has leaked Q* (Q-Star), perhaps to distract us from the Sam Altman saga. Those who have analyzed this leak speculate that the hyped new step toward AGI is a supervisory layer which selects the best gobble from the gook.

That's the big "next step" toward AGI, at least according to the AI influencers on YouTube, who have speculated about what Q* does based on reports that it tries to solve the well-known generative-AI "cannot solve simple math" problem.

Of course a multi-billion-dollar text-autocompletion engine cannot solve simple math. That was obvious at the beginning of 2023. So, to fix this basic problem, OpenAI seems to be working on a better selection process where, I speculate, the GenAI generates gobbledygook at near light speed, and the gobble-selection engines find the needle (the best gobble) in the gook (the haystack).

At first I was surprised to learn that OpenAI and other ML engineers were so excited about trying to find the signal in the noise among endless gobble probabilities, but it makes perfect sense when you think about it. Machines generate near-infinite gobble, and a selection process selects the best one.

When you have a successful commercial generative-AI product and the world is watching, while you are at the same time hyping AGI, it's hard to sound intelligent from a data-science perspective when the best you have is a gobbledygook generator and your code cannot perform simple math!

So, these same machine learning folks have seemingly decided, perhaps desperately in my view, that the next big breakthrough toward AGI is to have GenAI generate countless gobbledygook possibilities and have a supervisory layer rack-and-stack them to select the most-likely-correct gobble or two.
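
If that speculation is right, the mechanics would look something like best-of-N sampling with a verifier on top. The sketch below is my own guesswork, not anything from OpenAI: generate and score are hypothetical stand-ins, and the "verifier" is a deliberately dumb heuristic.

```python
import random

def generate(prompt, rng):
    """Pretend sampled LLM completion (temperature > 0, so answers vary)."""
    return rng.choice(["7", "12", "12", "13", "twelve-ish"])

def score(prompt, candidate):
    """Pretend verifier / reward model: a crude 'plausible answer' check."""
    return float(candidate.isdigit()) + (0.5 if candidate == "12" else 0.0)

def best_of_n(prompt, n=64, seed=0):
    """Sample n candidate 'gobbles', keep the one the supervisor ranks highest."""
    rng = random.Random(seed)
    candidates = [generate(prompt, rng) for _ in range(n)]
    return max(candidates, key=lambda c: score(prompt, c))

print(best_of_n("What is 5 + 7?"))  # prints "12" with this toy scorer
```

All the hard engineering hides inside score: a toy heuristic like this is trivial, while a learned verifier that reliably ranks reasoning chains is the open research problem.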

Some people are actually referring to this gobbledygook generator as "thinking". Do you think this is a viable step toward AGI? Please reply with your thoughts!

As for me, I remain open minded.

Maybe this is a great "first step" toward AGI?

Let us know what you think.

1 Like

This reminds me of when, 37 years ago, I was a specialised maths teacher for children with difficulties, having to argue with the father of an autistic boy who was trying to make me accept that his son was more than brilliant, with an IQ of 240...
I replied that he could have an IQ of 300 and I would not change my mind. For me, intelligence is how good you are at adapting and surviving in an environment you see for the first time. In other words, what would be the chances of survival of his son if I left him alone in a forest 30 km from where he lived? (That would be almost at the forest limit of the Jura, French side, where there is nothing for 10 km going north, with a bit of luck...) He admitted that his son would not have much of a chance, and then, depending on the season?...

Being able to learn and use what you have learned is only a (little?) part of what you call intelligence; reasoning, thinking, and elaborating new theories... it is more than that...

2 Likes

For me, as stated above, I cannot predict where these current baby steps toward AGI are headed; but there do seem to be some interesting possibilities, given enough computing power.

Generate a very large number of responses using one or more generative AI LLMs and then develop a selection-supervisory process (on top of that) to rank the responses generated.
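
For what it's worth, one cheap version of that ranking layer, assuming no learned verifier exists, would be plain self-consistency: sample many answers and keep the most frequent one. The sample_answer function below is a made-up stand-in for an LLM call at a non-zero temperature, not a real API.

```python
import random
from collections import Counter

def sample_answer(prompt, rng):
    """Stand-in for one LLM call at temperature > 0: answers vary per call."""
    return rng.choice(["12", "12", "12", "7", "13"])  # toy answer distribution

def majority_vote(prompt, n=100, seed=0):
    """Self-consistency: the most frequent answer wins, no verifier needed."""
    rng = random.Random(seed)
    votes = Counter(sample_answer(prompt, rng) for _ in range(n))
    return votes.most_common(1)[0][0]

print(majority_vote("What is 5 + 7?"))  # almost always "12"
```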

Not sure if the results would be an "Artificial General Intelligence" (AGI), but there is a finite probability it could be much better than the current LLMs (like ChatGPT) out there.

2 Likes

Progress by experimentation: even if the experiments start off as brute force and ignorance, that's nothing new.

2 Likes

Amazing how people try to use LLMs to solve problems which require accuracy, not fictional auto-completion.

Here is a great article on CNN:

ChatGPT struggles to answer medical questions, new research finds

It's truly amazing how so many professional people do not understand that generative-AI LLMs are simply modern auto-completion engines which neither reason nor provide any expertise beyond predicting text.

When the researchers asked the chatbot for scientific references to support each of its responses, they found that the software could provide them for only eight of the questions they asked. And in each case, they were surprised to find that ChatGPT was fabricating references.

At first glance, the citations looked legitimate: They were often formatted appropriately, provided URLs and were listed under legitimate scientific journals. But when the team attempted to find the referenced articles, they realized that ChatGPT had given them fictional citations.

Not sure why these "researchers" were surprised. LLMs generate references based on text prediction, not on legitimate research!
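
If a selection layer is ever going to help here, it has to check the gobble against something outside the model. As a toy illustration for citations, the rough sketch below asks the public CrossRef API whether a cited title actually resolves to an indexed work; the matching heuristic is deliberately crude and is my own invention, not anything from the research above.

```python
import requests

def citation_exists(title):
    """Rough check: does CrossRef index a work with a similar title?

    Only a sketch of post-hoc verification; a real pipeline would also
    match authors, year, and journal before trusting a citation.
    """
    resp = requests.get(
        "",
        params={"query.bibliographic": title, "rows": 1},
        timeout=10,
    )
    resp.raise_for_status()
    items = resp.json()["message"]["items"]
    if not items:
        return False
    found = (items[0].get("title") or [""])[0]
    # Crude heuristic: the top hit should share a long title prefix.
    return found.lower().startswith(title.lower()[:20])
```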

I don't think the high expectations for Q* are realistic if it is going to generate gobbledygook at wire/light speed and "select the best" made-up, fictional references to support its made-up, fictional text completion, to be honest.

Generative AI LLMs are simply not the right model for serious factual content creation.

2 Likes

Amazing how 'normal'/'intelligent' people ignore the basics so often, as Chuck Dee and friends warn...

2 Likes

IMHO the golden retriever has much greater AGI than all those computers bundled together :wink:

2 Likes

Haha, maybe, but I will need empirical evidence :rofl:

This topic was automatically closed 90 days after the last reply. New replies are no longer allowed.