ChatGPT: What Can’t It Do?

Written by Tamar Peterson
Technology
3 mins read

ChatGPT seems like the ultimate get-out-of-work-free card. It can help you launch a million-dollar-grossing business, write a stern payment reminder for an overdue client, and even code. OpenAI, the company behind ChatGPT, is selling the dream of optimal results with minimal effort.


It’s divisive, to say the least. Fans of the artificial intelligence program are elated at the prospect of ChatGPT transforming (and upending) the landscape of written work, from CX interactions to marketing copy. Generative AI is the future, they say. Some have joked that ChatGPT’s intelligence is a bit too human-like, as this satirical job post for a “killswitch” position suggests. All said, it seems like an incredibly powerful tool. But when do ChatGPT’s talents hit a wall?


First, let’s talk about what makes ChatGPT impressive.


The team at OpenAI, riding the high of the public’s excitement and a huge influx of cash from Microsoft, is pushing the narrative that there’s very little ChatGPT can’t do, and that its capabilities will only improve. More data to analyze will lead to more accuracy. Perfection (and human-like intelligence) seems to be just a matter of time.


When it comes to specific claims about what ChatGPT can do…


  • It can answer questions you’d usually Google, providing a more succinct, digestible result than a typical search engine. Instead of a list of webpage results, ChatGPT spits out a pithy explanatory paragraph. If you have follow-up questions, you can ask them in the same conversation.
  • OpenAI reps claim that ChatGPT is the ultimate writing tool, capable of crafting ebooks and reports on any given topic. If true, this would free up a lot of time in the writing process: research, outlining, and drafting would be reduced to nil, leaving you to set aside time only for editing ChatGPT’s draft. Marketing materials like company-sponsored ebooks and blog posts could be outsourced to AI rather than to copywriters on staff.
  • ChatGPT also appears capable of writing code. This raises the possibility of organizations outsourcing software development to AI, now or in the future as the program evolves.

Given these large claims, and some promising real-world uses, ChatGPT appears to be a resource with huge potential.


As with any tool, we need to understand ChatGPT’s limitations before we can get the most out of it. So let’s take a look at some of its biggest shortcomings, because despite what OpenAI’s fanbase would like you to think, ChatGPT does have some pitfalls.

1. ChatGPT lies without realizing it

ChatGPT itself admits it has no idea whether what it says is true. In its own words:


I strive to provide accurate and helpful information to the best of my ability. However, it's important to note that the accuracy of my responses depends on the accuracy and completeness of the information that has been presented to me. *If the information provided to me is inaccurate or incomplete, my responses may reflect that.*
[emphasis mine]


ChatGPT works by predicting likely continuations of your prompt, based on patterns in text that humans themselves have written. When the model absorbs enough inaccurate information, it generates inaccuracies of its own. This gets at something Ezra Klein discussed on his podcast with AI researcher Gary Marcus: yes, it can be handy to use ChatGPT in lieu of a Google search, but the information it generates could be entirely wrong. Relying on ChatGPT for information about medical concerns, for example, can have dangerous results.
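
To make that mechanism concrete, here’s a toy sketch in Python. To be clear, this is nothing like ChatGPT’s real architecture or scale; it’s a minimal illustration of pattern-based text generation, and the corpus and continue_text helper are made up for the example. A model like this continues text purely from word statistics, so a false claim in its training data comes out just as fluently as a true one:

    # Toy bigram model: picks each next word based only on which words
    # followed it in the training text. Purely illustrative; ChatGPT's
    # actual model is vastly more sophisticated, but the core move
    # (predict a plausible next word) is the same.
    import random
    from collections import defaultdict

    # A tiny "training corpus" that happens to contain a false claim.
    corpus = "the moon is made of cheese and the moon is very bright".split()

    # Record which words follow which.
    followers = defaultdict(list)
    for prev, nxt in zip(corpus, corpus[1:]):
        followers[prev].append(nxt)

    def continue_text(word, length=5):
        """Extend `word` by repeatedly sampling a statistically likely next word."""
        out = [word]
        for _ in range(length):
            if word not in followers:
                break
            word = random.choice(followers[word])
            out.append(word)
        return " ".join(out)

    print(continue_text("the"))  # can fluently print "the moon is made of cheese"

The model has no concept of truth; it only knows which words tend to appear together. Scale that idea up by billions of parameters and you get fluency, not fact-checking.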


And what makes the situation so tricky is that when humans say things that aren’t true, there are usually tells. Garbage information often sounds like garbage. But ChatGPT has the uncanny ability to make false information sound believable because its tone is confident. And that can be enough for some people to take ChatGPT’s words at face value. Don’t be duped!

2. ChatGPT fails at creativity

Nothing ChatGPT says is original. Because it relies on material humans have created, everything it writes is derivative, literally derived from someone else’s work without giving them credit. There’s some debate about whether this constitutes plagiarism or intellectual theft. But regardless of whether it’s unethical, it’s bad for business.


Think about your org’s strategy for things like sales, marketing, or branding. What makes the copy effective? Chances are your copy is (or should be) showcasing the unique qualities of your product and targeting the unique desires of your prospect. Effective sales copy sets your offerings apart from the competition and homes in on your ideal customer. Effective branding tells a story that is special to your company and your mission.


These are specific details you cannot replicate with generalized copy derived from other people, which makes ChatGPT ineffective for marketing or sales copy. Its stories are not compelling. Its marketing falls flat. Its sales pitches will only work on people who aren’t picky about the products and services they buy, or the vendors they buy them from: one-time customers who are unlikely to become the brand loyalists you can depend on.

3. ChatGPT’s code is nonsense

When it comes to code, ChatGPT can be useful, but it can also generate downright gibberish. While the program appears capable of writing short snippets in just about any language (and has some debugging ability), it can also go off the rails, suggesting solutions that don’t exist. Take a look at this example, where a software developer asks ChatGPT to write code in Python for spatial data analysis:


“I was quite impressed with the response, especially its suggestion to use gpd.gridify(), a geopandas attribute I’ve never heard of before. However, when I tried to run the code, I found out that the module geopandas has no attribute gridify. In other words, the chat bot suggested for me to use a tool that looks really handy, but doesn’t actually exist.”


ChatGPT’s code doesn’t work because it makes up an attribute that doesn’t exist. When the human programmer tries to correct ChatGPT, it digs in its heels and insists that the human expert is mistaken. The author concludes that the goal of AI tools like ChatGPT is to mimic the appearance of truth, without necessarily being true.
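
For contrast, here’s a rough sketch of how a spatial grid could be built with APIs that actually exist. This is our own illustrative guess at the kind of task the developer described, not their code: the make_grid helper and cell_size parameter are invented for the example, but geopandas’ total_bounds and shapely’s box are real.

    # Cover a GeoDataFrame's bounding box with square grid cells,
    # using real geopandas/shapely APIs instead of the invented gpd.gridify().
    import geopandas as gpd
    import numpy as np
    from shapely.geometry import box

    def make_grid(gdf, cell_size):
        """Return a GeoDataFrame of square cells of side `cell_size` covering `gdf`."""
        minx, miny, maxx, maxy = gdf.total_bounds
        cells = [
            box(x, y, x + cell_size, y + cell_size)
            for x in np.arange(minx, maxx, cell_size)
            for y in np.arange(miny, maxy, cell_size)
        ]
        return gpd.GeoDataFrame(geometry=cells, crs=gdf.crs)

The point isn’t that this snippet solves the developer’s exact problem; it’s that a human has to know the library well enough to catch ChatGPT inventing one.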


This judgment is echoed by other software developers on discussion boards:


“This is the scary thing about this - I keep asking it questions, and it gives confident, lengthy responses that look perfect. But when you read it thoroughly, you start noticing small but important mistakes that are not obvious at first.”


In short, ChatGPT’s code tends to look plausible while hiding serious problems.


To recap, these things are currently beyond ChatGPT’s reliable abilities: telling the truth, writing original copy, and producing working code. Now that we have a handle on what ChatGPT can’t reliably do, we can explore the tasks it can be useful for.

Stay tuned for a follow-up on ChatGPT’s best uses and other AI tools that can aid productivity. Want to be notified? Sign up for our blog newsletter.


Interested in hiring talented Latin American nearshore developers to add capacity to your team? Contact Jobsity: the nearshore staff augmentation choice for U.S. companies.


Tamar Peterson is a digital marketer with a background in copywriting, arts programming, and public education. Her work has included teaching writing to diverse groups, advancing employment equity within corporate culture, and serving as an editor for critically acclaimed literary magazines. Tamar is currently the Content Marketing Leader at Jobsity, where she crafts messaging and manages performance of marketing initiatives, sales communications, and the company brand.
