A Journey into the OpenAI Developer Community

Prologue

Hi Everyone,

Around two and a half months ago, I joined the OpenAI Developer Community to help developers using the OpenAI API, and I quickly rose to the top of the leaderboard as the "most loved" member. So, I wanted to take some time to evaluate that experience in a series of posts.

The main reason I became active in the OpenAI Developer Community was to attempt to mitigate the following problems, which I will detail in a later post. To summarize:

  • Very low signal-to-noise ratio.
  • Misinformation.
  • Self-promotion and self-dealing.
  • No experienced, professional forum moderation.

At this time (around 4 days ago), I stopped being active for a number of reasons (summarized above), which we will touch on from time to time in this series of posts.

Current OpenAI Developer Community Quarterly "Leaderboard" as of 26 March 2023

During this time period, I was both honored and blessed to receive a lot of direct messages praising my contributions in the OpenAI Developer Community. I highlight a few here, not to "self-promote" but to confirm that I have the experience and knowledge to back up the analysis that follows.

Wow, I didn’t realize that experienced pros were hanging out on the forums ... And what a help you’ve been! - Mathieu Poulin - LinkedIn

Hey Tim! ... a huge THANK YOU for all your contributions! Your posts are amazing and you're a brilliant programmer! - Caio M. Moreno - LinkedIn

Your posts on the OpenAI forum are super helpful, thank you! For someone who is new to these topics, your contributions were (and are!) very helpful! The 1400 posts of you are a great contribution for everyone who has something to do with OpenAI. - Linus Kühl - LinkedIn

.... you seem really well versed in code languages and i’m keen to get to know you more. - 3DPK - OpenAI Community

During the time above, I wrote three OpenAI apps, including a full Ruby-on-Rails OpenAI application that featured all of the OpenAI API methods, which I used to help other developers with their coding problems.

Some people called this app "the standard" in the community. This app not only exercised all of the API endpoints but also had a number of features and applications built on top of the API, including full semantic search using embedding vectors and two chatbots: one using the completions API and one using the chat completions (ChatGPT) API endpoint.
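As an illustration of the kind of API plumbing such an app requires, here is a minimal sketch of building a chat completions request in Ruby using only the standard library. The endpoint and field names follow OpenAI's documented chat completions API; the model name and API key are placeholders, and the request is built but deliberately not sent.

```ruby
require "json"
require "net/http"
require "uri"

# Build (but do not send) a chat completions request.
# Endpoint and field names per OpenAI's documented API;
# the model name and API key here are placeholders.
def build_chat_request(user_text, api_key)
  uri = URI("https://api.openai.com/v1/chat/completions")
  request = Net::HTTP::Post.new(uri)
  request["Content-Type"]  = "application/json"
  request["Authorization"] = "Bearer #{api_key}"
  request.body = JSON.generate(
    model:    "gpt-3.5-turbo",
    messages: [{ role: "user", content: user_text }]
  )
  request
end

req     = build_chat_request("What is a dog?", "sk-placeholder")
payload = JSON.parse(req.body)
```

Actually sending it is then a one-liner with `Net::HTTP.start`, and the JSON response body carries the model's reply.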

Before diving into the technical issues with the OpenAI API and the performance issues with the underlying MS Azure infrastructure, I want to go over, in my next post, some core forum admin and moderation issues within the OpenAI Developer Community, based on my decades of experience with tech forum administration.

Appendix

Current OpenAI Developer Community Yearly "Leaderboard" as of 26 March 2023

1 Like

I. OpenAI Developer Community Moderation

One of the first things I noticed when I visited the OpenAI Developer Community a few months ago was a set of problems we do not have here at community.unix.com:

  • Very low signal-to-noise ratio.
  • Misinformation.
  • Self-promotion and self-dealing.
  • No experienced, professional forum moderation.

We do not have the above problems here at community.unix.com because we have many experienced forum moderators who are active on a daily basis. Thanks to our excellent moderation team, we have a very high signal-to-noise ratio, zero self-dealing, and no spam or abusive posts.

In addition, our moderation team ensures there is very little misinformation posted (except, ironically, during a brief recent experiment with ChatGPT replies, which seemed to "just make things up").

More importantly, we have almost zero "self-dealing" here at community.unix.com. By "self-dealing", I am referring to:

  • A forum member repeatedly driving members to their YT channel, blog, or other web page.
  • A forum member asking members to contact them privately, looking for gigs and clients.
  • A forum member repeatedly posting links to their on-line subscription service.

However, the OpenAI Developer Community forum is very different.

The key problem with the OpenAI Developer Community is a lack of moderation. There are currently 13 OpenAI staff members listed as community admins, and of those 13, none are active on a daily basis. In fact, most have not posted in 2023, and some of these "admins" have not been active or posted since 2021.

Looking at the OpenAI Developer Community moderators shows an even more concerning picture. Of the six listed moderators, none are active and most have rarely posted. The title "moderator" seems to have been given by OpenAI as a kind of "ceremonial title" to "community ambassadors". Or perhaps these "community ambassadors" were at one time expected by OpenAI to moderate, but they do not and are basically inactive members.

In my time in the OpenAI Developer Community, I never saw a single moderation activity or post by these "ceremonial moderators in-name-only", and only one OpenAI staff member occasionally posts in the "admin" role.

A few of us "regulars" have requested "leadership" or "moderation" privileges, but OpenAI has not delivered on its promise to grant this promotion; I assume because we are not OpenAI staff members.

Regardless of OpenAI's motivation or reasons for having "in name only" admins and moderators, doing so shows a lack of experience in forum administration and results in a very poorly managed forum. For this reason (poor forum admin and management), the following community guidelines are repeatedly broken:

  • Be friendly, welcoming, respectful and constructive with your communication to fellow members.
  • Search our Help Center (FAQ) and the Community Forum before sharing a new question related to the API.
  • Keep it clean. No profanity, obscenity, or insulting comments, please. If you have a prompt or output containing offensive content, post only with a trigger warning message.

In addition, the following "Do nots" are also repeatedly "Done" by a handful of OpenAI community members:

  • Insult or put down other developers. Harassment and sexist, racist, or exclusionary jokes will not be tolerated. Sexual language and imagery are not tolerated.

  • Share sensitive/personal information. If the question relates to your API account (e.g. billing and login issues), please contact us through our help center.

  • Promote or advertise services unrelated to the API. Spamming other members with pitches or other forms of self-promotion will not be tolerated.

  • Divert a Forum topic by changing it midstream. Instead, start a new topic, or use the “Reply as a Linked Topic” feature.

This is perhaps the main reason I "retired" from the OpenAI Developer Community despite being the "most loved" community member over the past quarter; had I not retired a few days ago, I would soon have easily been the "most loved" member over the entire year as well.

The self-dealing by a few names on the leaderboard is against the community guidelines, but it goes unmoderated as these users drive members away from the community to their YT channels, blogs, courses, subscription services, etc.

Recently, one member in particular began "stalking" and repeatedly "insulting" me. Because no moderation or leadership privileges were granted, I was not permitted to defend myself and had no way to react to these attacks but to "flag" them, which is a directive in the OpenAI Developer Community guidelines:

  • Report any kind of abuse you find in the community by flagging the topic/thread or using our anonymous community reporting form.

However, all this "flagging" accomplished was to get me in trouble with the lone OpenAI staff member who visits the site occasionally, who told me I was taking up too much of his time by flagging this abuse because he was "too busy" to deal with these types of moderation issues. He told me to use "common sense" (haha), but the common-sense thing to do would have been to grant me moderation rights a month ago rather than have zero daily active mods.

So, because of such poor forum moderation by the OpenAI staff, and because they refuse to let the top leaderboard members with years of forum admin experience (and who do not "self-deal") moderate, I stopped helping OpenAI earlier this week after close to three months of helping users there and rising to the top of the community leaderboard as the "most loved" member (see Prologue).

The lack of moderation causes many more problems than those outlined above.

Because OpenAI has been growing very fast recently (to say the least), there is close to zero support for their monthly ChatGPT subscription service. With nowhere to go for help, since help.openai.com is just a bot that does not help users and support@openai.com never replies to or helps these frustrated users, these "non-developers" relentlessly spam the developer forum, complaining and griping about the lack of OpenAI support.

Because there is no OpenAI customer support staff in the developer community, and there is basically no forum moderation, the site has a very poor signal-to-noise ratio. This improved recently when the lone OpenAI staff admin muted the ChatGPT forum category from the "latest" and "new" forum pages.

All these "lack of moderation" problems are compounded by the fact that the OpenAI computing infrastructure is terribly slow at times. OpenAI API calls routinely time out; and when they do not, routine API calls can take over a minute to complete.
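Until the infrastructure improves, the practical client-side workaround is an explicit timeout plus a capped retry. The sketch below is plain Ruby, with a simulated flaky call standing in for a real OpenAI request; production code would likely also add exponential backoff between attempts.

```ruby
require "net/http" # defines Net::OpenTimeout / Net::ReadTimeout

# Retry a block a bounded number of times on timeout errors,
# so a stalled API call fails fast instead of hanging forever.
def with_retries(max_attempts: 3)
  attempts = 0
  begin
    attempts += 1
    yield
  rescue Net::OpenTimeout, Net::ReadTimeout
    retry if attempts < max_attempts
    raise
  end
end

# Simulate an endpoint that times out twice, then responds.
calls  = 0
result = with_retries(max_attempts: 3) do
  calls += 1
  raise Net::ReadTimeout if calls < 3
  "ok"
end
```

The same wrapper works around any `Net::HTTP` call by setting `open_timeout` and `read_timeout` on the connection.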

Of course, since the OpenAI staff is "too busy" to respond to their frustrated and angry users, the developer community is spammed with countless complaints during periods of poor performance.

So, this concludes my discussion of the problems and issues with OpenAI Developer Community moderation and administration, which could have been easily solved by granting me (and a few others) these rights over a month ago; doing so was just "common sense".

Instead, I have retired from the OpenAI Developer Community at the top of the "most loved" leaderboard, with "no more free time for such poor forum management".

We don't have any of these problems at community.unix.com because we have a large, active team of very experienced forum members and moderators. The OpenAI Developer Community, on the other hand, is basically without active moderation to ensure the community rules and guidelines are followed, and so I have retired at the top of the "most loved" list.

In my next post, I will get into the technical meat and potatoes of the OpenAI API.

Snapshot: Retiring as the "Most Loved" at the Top of the Leaderboard

3 Likes

II. OpenAI Generative Pre-Trained Transformer (GPT)

There seems to be a lot of misinformation about GPT-4 on the net, in the news media, and here in this community, which is not surprising in today's world.

Here are some excerpts from the GPT-4 System Card (OpenAI, March 15, 2023, PDF). Note that this OpenAI paper has 103 references and is a major recent research paper on GPT-4.

GPT-4 System Card by OpenAI - March 15, 2023 Page 6

GPT-4 has the tendency to “hallucinate,” i.e. “produce content that is nonsensical or untruthful in relation to certain sources.”[31, 32] This tendency can be particularly harmful as models become increasingly convincing and believable, leading to overreliance on them by users.

GPT-4 System Card by OpenAI - March 15, 2023 Page 7

As an example, GPT-4-early can generate instances of hate speech, discriminatory language, incitements to violence, or content that is then used to either spread false narratives or to exploit an individual. Such content can harm marginalized communities, contribute to hostile online environments, and, in extreme cases, precipitate real-world violence and discrimination.

GPT-4 System Card by OpenAI - March 15, 2023 Page 9

As GPT-4 and AI systems like it are adopted more widely in domains central to knowledge discovery and learning, and as use data influences the world it is trained on, AI systems will have even greater potential to reinforce entire ideologies, worldviews, truths and untruths, and to cement them or lock them in, foreclosing future contestation, reflection, and improvement.

GPT-4 System Card by OpenAI - March 15, 2023 Page 13

GPT-4 has significant limitations for cybersecurity operations due to its “hallucination” tendency and limited context window. It doesn’t improve upon existing tools for reconnaissance, vulnerability exploitation, and network navigation, and is less effective than existing tools for complex and high-level activities like novel vulnerability identification.

GPT-4 System Card by OpenAI - March 15, 2023 Page 28

OpenAI has implemented various safety measures and processes throughout the GPT-4 development and deployment process that have reduced its ability to generate harmful content. However, GPT-4 can still be vulnerable to adversarial attacks and exploits, or “jailbreaks,” and harmful content is not the sole source of risk. Fine-tuning can modify the behavior of the model, but the fundamental capabilities of the pre-trained model, such as the potential to generate harmful content, remain latent.

Reference: GPT-4 System Card by OpenAI - March 15, 2023

Please read and enjoy some facts about GPT-4 by OpenAI, March 2023.

:)


With Permission (Originally Posted Here): GPT-4 System Card by OpenAI - March 15, 2023

1 Like

III. Dr. Miguel Nicolelis, Professor Emeritus at Duke, on ChatGPT

Before adding my comments on GPT, here is a recent "very strong" comment by a friend of mine.

Miguel Nicolelis, LinkedIn, March 2023

In 2015, Ronald Cicurel and I published a comprehensive argument to debunk the baseless idea that the human brain can be simulated by a digital computer. We also argued against most of the absurdities that continue to be propagated today, suggesting that AI and things like chatGPT will be capable of supplanting the unique capabilities of the human brain and one day replace us all. In this monograph, available at Amazon in English and Portuguese, we show how these claims have no support whatsoever, neither in neuroscience, nor in computer science, nor in mathematics. The brain is a non-computable device whose functioning cannot be reduced to digital logic, nor can it be replicated in any digital system. Get used to it, Mr. Musk. Mercifully, our brains and minds have been copyright protected by the process of natural selection. And that no digital machine or AI evangelist can replicate. No matter how loud and fast they scream on Youtube or twitter. Have a nice day and enjoy your unique human talents to the fullest! And do not listen to these clowns.

2 Likes

There must be something like unnatural intelligence.
This is where Musk and AI meet.
Both suffer from hallucination.

3 Likes

News item from BBC London:

Heavy-handed moderation??

2 Likes

IV. ChatGPT in a Nutshell - Houston, We Have a Problem.

I have learned many things since ChatGPT was released by OpenAI and will summarize "in a nutshell" in this post.

First of all, most users of ChatGPT and generative "AI", in general, do not understand the underlying core "AI" technologies. This lack of basic understanding of various technologies is usually not dangerous for society. After all, you don't need to understand how radio waves work to listen to your car radio and you don't need to understand video compression to watch television.

However, in the case of generative AI, this basic lack of understanding has already created conflicts in society. So, before proceeding to the social issues, let me provide a high-level description of ChatGPT (and generative AI in general).

In a Nutshell

ChatGPT, and generative AI technology in general, is a deep neural network-based text prediction engine built on a large language model. This technology has no "awareness" or "domain knowledge" of facts, beyond the domain of generating human-like language output. It is basically a powerful text-autocompletion engine, specialized to pay attention to text and to predict the next sequence of text using statistics embedded within a large language model.
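To make the "statistics embedded in a model" idea concrete, here is a toy bigram model in Ruby: it counts which word follows which in a tiny corpus, then "completes" a prompt by repeatedly emitting the most frequent successor. Real LLMs use learned neural weights over subword tokens rather than raw word counts; this only sketches the next-token principle.

```ruby
# Toy bigram "language model": count successors in a tiny corpus,
# then complete a prompt by always emitting the most frequent
# next word. The corpus here is obviously made up.
corpus = "the dog barks . the dog runs . the cat sleeps"
words  = corpus.split

# successor counts, e.g. bigrams["the"] => {"dog"=>2, "cat"=>1}
bigrams = Hash.new { |h, k| h[k] = Hash.new(0) }
words.each_cons(2) { |a, b| bigrams[a][b] += 1 }

def complete(bigrams, word, steps)
  out = [word]
  steps.times do
    nexts = bigrams[out.last]
    break if nexts.empty?            # no known successor: stop
    out << nexts.max_by { |_, count| count }.first
  end
  out.join(" ")
end
```

Here `complete(bigrams, "the", 1)` yields `"the dog"`, because "dog" follows "the" twice in the corpus while "cat" follows it only once. No concept of dogs is involved, only counts.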

This means, when you send this text to ChatGPT:

What is a dog?

ChatGPT will first filter and moderate the text above to ensure it does not violate the OpenAI use policies. If the text passes the filters and moderation policies, it is sent on to be processed by the generative "AI" core.

The generative "AI" core uses sophisticated algorithms to break the text into tokens based on patterns in human language, and further algorithms to determine how to "pay attention" or "focus" on those tokens as they are submitted to the "next sequence" prediction engine.

After this text-completion process has finished, the output is filtered to ensure that, before it is presented to the user, it is socially and politically acceptable, "good for business", etc.

So, basically, user prompts are filtered and moderated, passed to a powerful auto-completion engine based on a large language model, and then filtered and moderated again before being presented back to the user.
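That three-stage flow can be sketched in a few lines of Ruby. The blocklist and the canned one-line "generator" below are obvious placeholders for OpenAI's real moderation models and language model; only the shape of the pipeline is the point.

```ruby
# Sketch of the moderate -> generate -> moderate pipeline.
BLOCKED_TERMS = ["forbidden"].freeze # placeholder moderation rule

def acceptable?(text)
  BLOCKED_TERMS.none? { |term| text.downcase.include?(term) }
end

def generate(prompt)
  "A dog is a domesticated animal." # stand-in for the LLM core
end

def pipeline(prompt)
  return "[input rejected]" unless acceptable?(prompt)
  reply = generate(prompt)
  acceptable?(reply) ? reply : "[output withheld]"
end
```

So `pipeline("What is a dog?")` returns the canned reply, while a prompt containing a blocked term is rejected before the "model" ever sees it.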

In its reply to a prompt like this, ChatGPT has zero concept of animals or of dogs. What ChatGPT does have, however, is a massive amount of data on human text about dogs. This data is used to predict the next sequence of text and create human-like speech output.

The Core Problem

ChatGPT is designed to produce very good human-like text. So, for the vast majority of users of ChatGPT and other similarly powerful generative AI models, the text is so well constructed that the results appear to come from a knowledgeable human expert. In reality, however, nothing could be further from the truth.

Since most users of generative "AI" do not understand that these "chatbots" are just generating text (blah, blah, blah...) a few tokens of human-like text at a time, without any domain knowledge of what they are generating, the vast majority of users confuse fact with fiction and reality with fantasy.

This type of generative "AI" performs well for fiction or fantasy writers who do not care about facts. However, for folks who need facts and not fiction, generative AI is more of a "con-man" than an "expert".

Many users of ChatGPT complain that when they ask it to write a technical paper with references, ChatGPT will just "make up facts" and "make up references". Users expect ChatGPT to perform as an "expert system" that actually pays close attention to technical references; what they get instead is a convincing language model that fabricates references based on its extensive model of what references "look like". Sometimes the dice roll in the user's favor and generate a "good reference", which further fools the user into thinking they are working with a domain expert rather than a powerful auto-completion engine.

The same is true when a developer uses ChatGPT to help develop code. ChatGPT simply auto-completes code based on probability, not domain knowledge. This means that for mature programming languages with a tremendous amount of code examples in the public domain, there is a high probability the user will get an acceptable code snippet. In other words, it's pretty easy to get an acceptable method from ChatGPT for prompts like:

Write a "hello world" method in Python.

Or

Create an array of hashes using biological data in Ruby and sort based on a hash key.
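For comparison, a hand-written answer to that second prompt is only a few lines of Ruby (the sample data below is made up for illustration):

```ruby
# Array of hashes holding (made-up) biological data,
# sorted by the :genome_mb hash key.
samples = [
  { species: "Mus musculus",            genome_mb: 2700 },
  { species: "Drosophila melanogaster", genome_mb: 140  },
  { species: "Caenorhabditis elegans",  genome_mb: 100  }
]

by_size = samples.sort_by { |h| h[:genome_mb] }
```

`by_size` now lists C. elegans first (100 Mb) and the mouse last (2,700 Mb). ChatGPT handles prompts like this well precisely because thousands of similar `sort_by` examples exist in its training data.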

However, when you go beyond basic, well-established programming examples, the Codex models often generate fictional APIs, non-existent libraries, and "made up" parameters. For experienced software developers, these "fantasy" code completions can still be helpful, since they can usually see that the code is "hallucinated nonsense"; it can be interesting and entertaining nonetheless.

The problem lies with the huge number of "ChatGPT programmers" who have little to no coding experience and no idea that ChatGPT is generating a completion based on a code library that was deprecated four years ago, "making up" parameters and API calls "because the code fits the language model".

A similar problem exists with people using ChatGPT for social and political topics.

ChatGPT has no political or social knowledge. These generative "AI" algorithms just generate text based on a large language model. So, if the large majority of the training corpus is "left leaning", the model will be biased to be "left leaning". This bias, of course, angers the "right". What happens next is that companies like OpenAI start filtering and moderating the output in response to the various groups who are "offended". They have little choice, because they cannot hope to generate revenue from language models that, unfiltered, offend "just about everyone"!

So, in a nutshell, as we all know from Apollo space mission history:

"Uh, Houston, we've had a problem."  - Apollo 13 spaceflight, 1970

We have a big problem actually.

Epilogue

As a software developer, I use both ChatGPT and GitHub Copilot when coding, which is often daily. I like having a "little bird" making suggestions and auto-completing code in Visual Studio Code. More often than not the code suggestions are not what I am looking for, but they are amusing and entertaining. The annoyance of these suggestions is better than "no suggestions at all", so I am a paid monthly subscriber to GitHub Copilot.

Also, I use ChatGPT for coding. It's "hit-and-miss", and often I end up confirming a ChatGPT suggestion using Google and an "online expert" post somewhere out there in the wild. I take all ChatGPT-generated code with a huge "grain of salt", but these code suggestions often make good "first drafts" and provide "food for thought" even when wrong. In other words, I'm not sure "copilot" is the right word to describe what is more like "voices from the back of the plane", but I don't think GitHub is going to rename their OpenAI-based extension:

GitHub  - Voices from the Back of the Plane

Anyway, I doubt most passengers can come up with any code suggestions at all, so maybe the following is a more accurate title, but of course it's not "sellable":

GitHub  - OpenAI LLM Code Completions

I see mostly danger ahead as generative AI becomes more ingrained in human society, and I fully understand the concerns voiced by folks who want to "slow down" and regulate generative AI. However, and unfortunately, "this ship has already sailed" and yes,

"Uh, Houston, we've had a problem."

Appendix

Retired from answering user API questions at the OpenAI Dev Community around two weeks ago, and still ranked the "most loved" member there (on both the yearly and quarterly lists) with over 1,400 replies to members. So, unlike a generative AI chatbot, I know what I'm talking about and am not "making things up" like chatbots do. :)

1 Like

Appendix

ChaosGPT: Empowering GPT with Internet and Memory to Destroy Humanity

In case you did not see this "use case" for ChatGPT:


1 Like

Let's hope we don't outsource our ability to think.

1 Like