Continuing the theme of technology and the potential dangers of AI from my last article, in the above video Bryan Osborne of Answers in Genesis shares his analysis of the newest technological breakthrough, ChatGPT.
According to Wikipedia, ChatGPT is: “…an artificial intelligence chatbot developed by OpenAI and launched on November 30, 2022. It is notable for enabling users to refine and steer a conversation towards a desired length, format, style, level of detail, and language used. Successive prompts and replies are taken into account at each stage of the conversation as a context.”
So basically, you can ask it more complicated questions and have it create things for you. Some have used it to write résumés, songs, and stories; troubleshoot cars and household appliances; and even get quick answers to questions about almost anything. It is an interesting tool, but it has limitations. According to the OpenAI website, these limitations are:
- ChatGPT sometimes writes plausible-sounding but incorrect or nonsensical answers. Fixing this issue is challenging, as: (1) during RL training, there’s currently no source of truth; (2) training the model to be more cautious causes it to decline questions that it can answer correctly; and (3) supervised training misleads the model because the ideal answer depends on what the model knows, rather than what the human demonstrator knows.
- ChatGPT is sensitive to tweaks to the input phrasing or attempting the same prompt multiple times. For example, given one phrasing of a question, the model can claim to not know the answer, but given a slight rephrase, can answer correctly.
- The model is often excessively verbose and overuses certain phrases, such as restating that it’s a language model trained by OpenAI. These issues arise from biases in the training data (trainers prefer longer answers that look more comprehensive) and well-known over-optimization issues.
- Ideally, the model would ask clarifying questions when the user provided an ambiguous query. Instead, our current models usually guess what the user intended.
- While we’ve made efforts to make the model refuse inappropriate requests, it will sometimes respond to harmful instructions or exhibit biased behavior. We’re using the Moderation API to warn or block certain types of unsafe content, but we expect it to have some false negatives and positives for now. We’re eager to collect user feedback to aid our ongoing work to improve this system.
What Osborne finds, though, is that the software has far more flaws than anticipated. Why? Because the internet now carries a vaster range of human opinions than ever before, and that is where the chatbot gets its information.
No doubt, the internet has become the new Tower of Babel. We must be ever vigilant to get our information from trusted sources. In a world overflowing with information, we must have discernment and use critical thinking to make sure that what we see on the internet is actually true. Watch the video to learn more about the dangers of ChatGPT.
