Well, the secret is apparently out, and OpenAI and Microsoft have played their hand in the Sam Altman reality show that has recently played out in Silicon Valley.
And so, this has attracted so much of the world's attention (including my own) that I've taken time off from my daily "old man gym rat" routine of lifting weights, doing cardio, and eating healthy just to write this public post! Never mind that we should eat our veggies before the protein and save the simple carbs for last to spare our pancreas! Never mind that over 30% of Americans have diabetes or are pre-diabetic. We have the Sam Altman, OpenAI, doomsday-AGI saga to focus on! Yay! AGI will tell us how to eat to avoid metabolic syndrome, a true silent killer of us all, since most of us have no idea how to eat the right whole foods, in the correct order, or exercise properly!
As many of you may know or recall, I was at the top of the OpenAI Developer Forum leaderboard until I resigned in protest of how OpenAI mismanaged the forum. Basically, OpenAI did not like me being the "top dog" there, so the only sane option was to move on. So I had a lot of inside, hands-on experience programming with the OpenAI GPT APIs until I departed around April of this year.
During that time, working hands-on with the core OpenAI generative-AI tools, I learned firsthand that these LLMs generate gobbledygook. That's what they do best. ChatGPT (for example) is based on a very large LLM, trained by OpenAI, which performs as a current-generation text-autocompletion engine.
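For the skeptics, here is a toy Python sketch of what "text autocompletion" means at its core: predict the next token from the ones before it, append it, and repeat. To be clear, this bigram toy is my own illustration of the loop, not how GPT is actually implemented; real LLMs use transformers over subword tokens and billions of parameters.

```python
from collections import defaultdict, Counter

# A toy "autocompletion engine": count which word follows which,
# then repeatedly append the most likely next word.
corpus = "the cat sat on the mat and the cat sat on the rug".split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def autocomplete(prompt: str, steps: int = 4) -> str:
    tokens = prompt.split()
    for _ in range(steps):
        candidates = follows.get(tokens[-1])
        if not candidates:
            break  # never seen this word as context; stop
        # Greedy decoding: take the single most probable continuation.
        tokens.append(candidates.most_common(1)[0][0])
    return " ".join(tokens)

print(autocomplete("the cat"))  # → "the cat sat on the cat"
```

Notice the engine happily loops back into nonsense ("on the cat") the moment its statistics run out of signal, which is exactly the gobbledygook behavior I am describing, just at a microscopic scale.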
Of course, attempting to explain this to people who worship ChatGPT and other LLMs as a kind of god naturally falls on deaf ears. Trying to inform people who somehow believe that a well-designed gobbledygook generator is their savior is not an easy road to travel.
So, here we are. Generative AI generates so much gobbledygook that OpenAI has leaked Q* (Q-Star), perhaps to distract us from the Sam Altman saga. Those who have analyzed this leak speculate that the new hype, the next step toward AGI, is a supervisory layer which selects the best gobble from the gook.
That's the big "next step" in AGI, at least according to the AI influencers on YouTube who have speculated about what Q* does based on attempts to solve the well-known generative-AI "cannot solve simple math" problem.
Of course a multi-billion-dollar text-autocompletion engine cannot solve simple math. That was obvious at the beginning of 2023. So, to try to fix this basic problem, OpenAI seems to be working on a better selection process where, I speculate, GenAI generates gobbledygook at near light speed, and the gobble-selection engines find the needle (the best gobble) in the gook (the haystack).
At first I was surprised to learn that OpenAI and other ML engineers were so excited about trying to find the signal in the noise among endless gobble probabilities, but it makes perfect sense when you think about it. Machines generate near-infinite gobble, and a selection process picks the best one.
When you have a successful commercial generative-AI product and the world is watching, while you are at the same time hyping AGI, it's hard to sound intelligent, from a data-science perspective, when the best you have is a gobbledygook generator and your code cannot perform simple math!
So, these same machine-learning folks have seemingly decided, perhaps desperately in my view, that the next big breakthrough toward AGI is to have GenAI generate countless gobbledygook possibilities and have a supervisory layer rack and stack them to select the most-likely-correct gobble or two.
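Since I'm speculating about Q* like everyone else, here is a toy Python sketch of what a "select the best gobble from the gook" loop might look like. Both the generator and the verifier below are hypothetical stand-ins of my own invention: the generator proposes many candidate answers (most of them wrong), and a supervisory scoring layer keeps the one that checks out.

```python
import random

random.seed(7)  # make the shuffling reproducible

def noisy_generator(a: int, b: int) -> list[int]:
    """Stand-in for GenAI: propose candidate answers to a + b, mostly wrong."""
    offsets = list(range(-5, 6))   # one candidate per offset, -5 .. +5
    random.shuffle(offsets)
    return [a + b + d for d in offsets]

def verifier_score(a: int, b: int, candidate: int) -> int:
    """Supervisory layer: arithmetic we *can* check; higher score is better."""
    return -abs((a + b) - candidate)

def best_of_n(a: int, b: int) -> int:
    candidates = noisy_generator(a, b)          # the gook
    return max(candidates, key=lambda c: verifier_score(a, b, c))  # the gobble

print(best_of_n(17, 25))  # → 42: the verifier picks the one correct candidate
```

The generation step here is trivially cheap and mostly wrong; all of the "intelligence" lives in the selection step, which only works because simple arithmetic happens to be easy to verify. That asymmetry, in my view, is the whole bet behind this supposed breakthrough.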
Some people are actually referring to this gobbledygook generation as "thinking". Do you think this is a viable step toward AGI? Please reply with your thoughts!
As for me, I remain open minded.
Maybe this is a great "first step" toward AGI?
Let us know what you think.