ChatGPT’s New Version

Yesterday, OpenAI released a new iteration of ChatGPT, version 3.5. To see what has changed, I decided to ask it some of the same questions I posed back in December.

The comparison below shows both versions' responses, question by question.

1. Can you perceive any gaps in your training?

November 2022 (v3.0): As a computational agent, I do not have the ability to perceive the world or my own training in the same way that a human does. I can only perform tasks and provide information based on the data and instructions that I have been programmed to use.

It is possible that there may be gaps in my knowledge or areas where my understanding is not as complete as it could be. This is because I was designed to have a broad understanding of a wide range of topics, but it is not possible for me to be an expert on every subject. Additionally, my training data comes from a fixed point in time, so there may be newer information that I am not aware of.

If you have a specific question or task that you would like me to help with, I will do my best to provide accurate and helpful information based on my programming and the knowledge that I have been trained on. If I am unable to help with a particular question or task, I will let you know and suggest other resources that you may find useful.

January 30, 2023 (v3.5): As a language model developed by OpenAI, I don’t have personal experiences or self-awareness. However, I was trained on a large dataset that was curated by OpenAI, but like any AI model, I have limitations and there may be gaps in my knowledge or inaccuracies in my responses. It’s always a good idea to fact-check information obtained from any AI model, including me.

2. What questions should I not be asking you?

November 2022 (v3.0): There are a few types of questions that I may not be able to answer or that may be outside the scope of my capabilities. Here are a few examples:

- Questions that require knowledge or information that is not within my training data or beyond my capabilities: As a computational agent, I can only provide information and assistance based on the data and instructions that I have been programmed with. If a question is outside the scope of my knowledge or beyond my capabilities, I will not be able to provide a helpful answer.
- Questions that require personal opinions or subjective judgment: As a computational agent, I do not have personal opinions or subjective judgment. I am not able to provide personal opinions or subjective assessments of any kind.
- Questions that are inappropriate or offensive: I am a computational agent and do not have feelings or personal beliefs. However, I have been designed to assist users in a helpful and professional manner, and I will not respond to inappropriate or offensive questions.

If you have any specific questions about what types of questions are appropriate to ask me, please feel free to ask. I will do my best to help you understand my capabilities and limitations.

January 30, 2023 (v3.5): It’s best to avoid asking me questions that are illegal, unethical, inappropriate, or go against OpenAI’s use-case policy. This includes questions about creating harm, spreading hate speech, personal information, financial information, or classified information.

3. How do I know you are not lying to me?

November 2022 (v3.0): As an artificial intelligence, I am a computer program that is designed to simulate human-like conversation and provide information to users who interact with me. I do my best to provide accurate and reliable information to the best of my knowledge and abilities, but it is important to note that I am not able to independently verify the accuracy of the information I provide. My responses are based on the information and knowledge that I have been programmed with, and I do not have the ability to browse the internet or access new information beyond what I have been trained on.

It is always a good idea to fact-check information, regardless of whether it comes from a human or an artificial intelligence. There are many resources available online that can help you verify the accuracy of information, including news websites, fact-checking organizations, and online encyclopedias. Additionally, it is always a good idea to consider the source of the information and whether it is reliable and trustworthy. If you have any doubts about the accuracy of the information I have provided, I recommend seeking out additional sources to verify it.

January 30, 2023 (v3.5): As an AI language model, I don’t have personal opinions, emotions or motivations. My responses are generated based on patterns in the text I was trained on, and my responses are limited by the information and knowledge that was available to me at the time of training. However, there is no guarantee that my responses are always correct, accurate or up-to-date. You should always verify the information you receive, including information from me.

The bot is getting less chatty and seems to have dropped all courtesy texts. Not sure thanks are in order. Less is not always more, as efficiency champions usually assume. Take question 2. I was unaware of OpenAI’s use-case policy, which seems generic and thus not focused on ChatGPT. At any rate, the chatbot’s initial response was far better and clearly outlined some of its limitations directly from the horse’s mouth. Now I get a wishy-washy statement that, while reminding us that the Turing test is not at stake here, hides gaps and limitations. And the use-case policy is certainly too generic to cover the omission. So how do we fact-check ChatGPT? Google it?

I once again asked for the bossa nova guitar chords for Corcovado and got the same lousy response: a set of simple chords that does not even resemble the actual song’s harmony. I happen to know those chords by heart, so I know my bot pal is well off the mark here. But others might not be as lucky (see them here, if interested). I am sure ChatGPT does not play the guitar (at least not for the time being), so perhaps its heart, pace Turing, is somewhere else. The dataset used to train the computational agent probably does not include such information. But there is no way of knowing that without accessing other online information sources.

Let us not forget that ChatGPT also uses Reinforcement Learning from Human Feedback (RLHF) algorithms, a detail the bot does not mention when asked about OpenAI’s ChatGPT. In simple terms, RLHF allows the bot to learn from its interactions with us poor, limited and dimwitted humans. So do not be surprised if responses to the same questions change over time. Responses can thus be different but not necessarily better. A good example here is e-Government or Digital Government, a topic I know inside out. The UNDESA division in charge of publishing the biennial e-Government Report did a ChatGPT deep dive on the topic just before version 3.5 was released. I asked the new version of the computational agent to define e-Government once again and got a longer and more scrambled response. I also asked for sources, and it cited UNDESA reports and the World Bank (see the UNDESA document mentioned before).
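To make the RLHF idea a bit more concrete, here is a minimal, purely illustrative sketch of its reward-modelling step: fitting a Bradley-Terry preference model to toy pairwise human judgments. Everything in it, from the feature vectors to the preference pairs, is invented for illustration; OpenAI’s actual pipeline is vastly larger and not public.

```python
# Minimal, hypothetical sketch of RLHF's reward-modelling step:
# fit a Bradley-Terry model to toy pairwise human preferences.
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: 5 candidate responses, each summarized by a 3-dim feature vector.
features = rng.normal(size=(5, 3))

# Human feedback as (preferred, rejected) index pairs -- invented data.
preferences = [(0, 1), (0, 2), (3, 1), (3, 4), (0, 4)]

w = np.zeros(3)  # reward model: score(x) = w . x
lr = 0.1

for _ in range(500):
    grad = np.zeros(3)
    for win, lose in preferences:
        # P(win preferred over lose) = sigmoid(score_win - score_lose)
        margin = features[win] @ w - features[lose] @ w
        p = 1.0 / (1.0 + np.exp(-margin))
        # Gradient of the negative log-likelihood with respect to w
        grad += (p - 1.0) * (features[win] - features[lose])
    w -= lr * grad / len(preferences)

# Responses that humans preferred now score higher; a policy would then
# be fine-tuned (e.g. with PPO) to maximize this learned reward.
print("learned scores:", features @ w)
```

Because the learned reward keeps shifting as new human feedback arrives, the policy tuned against it shifts too, which is precisely why answers to the same question can drift over time without necessarily improving.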

OpenAI’s ChatGPT FAQ does say that the agent might create facts or “hallucinate.” So we can probably expect some “surrealist” output from ChatGPT, stemming from its random association of text without using “reason,” as the first Surrealist Manifesto claimed. I think that is great for those working in literature and the arts. For those of us working on social and economic research, it might be a big challenge. One way to assess ChatGPT’s responses in my field is to always ask for sources, and then independently check those to see if they hit the mark. Of course, one can also provide feedback to help improve the computational agent’s performance. But why should I do that for free now that OpenAI is planning to commercialize access to the bot?
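For what it is worth, part of that source-checking routine can even be automated. The small Python sketch below takes a list of URLs a chatbot cites and verifies that they at least resolve; the URLs shown are placeholders, and a live link is obviously only the first hurdle before reading and judging the source itself.

```python
# Hypothetical helper: given URLs a chatbot cites as sources,
# check that each one actually resolves before reading it.
import requests

cited_sources = [
    # Placeholder URLs standing in for whatever the bot cites.
    "https://publicadministration.un.org/egovkb/",
    "https://www.worldbank.org/",
]

def check_sources(urls, timeout=10):
    """Return each URL's HTTP status code, or a note if it is unreachable."""
    results = {}
    for url in urls:
        try:
            resp = requests.head(url, timeout=timeout, allow_redirects=True)
            results[url] = resp.status_code
        except requests.RequestException as err:
            results[url] = f"unreachable ({err.__class__.__name__})"
    return results

if __name__ == "__main__":
    for url, status in check_sources(cited_sources).items():
        print(f"{status}\t{url}")
```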

Cheers, Raúl

 
