Does ChatGPT Provide Helpful Advice?
If you haven’t heard about ChatGPT, where have you been these past few years? It seems that virtually everyone is talking about it.
ChatGPT is an example of an artificial intelligence system, specifically a language model that can produce humanlike text. It allows users to ask questions and receive immediate responses. It is well suited to higher education, although educators should understand its limitations, in particular the trustworthiness of its responses, before deciding to use it. ChatGPT is now used in many arenas, as discussed below.
I have previously blogged about ChatGPT as an artificial language model that depends on the data it is fed to make inferences and return accurate information. Drawing on a wide range of internet data, ChatGPT can help users answer questions, write articles, program code, and engage in in-depth conversations on a substantial range of topics. One area of concern is the ethics of its use, as discussed below.
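To make the question-and-answer interaction concrete, here is a minimal sketch of querying the model programmatically through OpenAI's Python SDK. The model name and the question are illustrative choices of mine, and an API key in the environment is assumed; treat it as a sketch, not a definitive recipe.

```python
# Minimal sketch of asking ChatGPT a question programmatically.
# Assumes `pip install openai` and an OPENAI_API_KEY environment variable;
# the model name and question are illustrative choices, not recommendations.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # any available chat model would work here
    messages=[
        {
            "role": "user",
            "content": "What should I consider before acting on AI-generated advice?",
        }
    ],
)

print(response.choices[0].message.content)  # the model's immediate reply
```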
GPT-3.0 was launched in 2020. In November 2022, OpenAI introduced a chat interface to the model, GPT-3.5, and the public response was staggering: within 90 days the chatbot had registered more than 100 million users. In early March 2023, OpenAI replaced GPT-3.5 with GPT-3.5 Turbo, and two weeks later it launched an advanced version.
ChatGPT 4.0 is now available. The main distinction between GPT-3.5 and GPT-4 lies in their scale and capabilities. While GPT-3.5 is reported to have been trained on 175 billion parameters, OpenAI has not disclosed GPT-4's parameter count, though the model is widely believed to be substantially larger and more sophisticated. This improvement enables GPT-4 to provide more nuanced and contextually relevant responses, pushing the limits of natural language processing and establishing new benchmarks for conversational AI systems.
Perceptions About the Benefits of Using ChatGPT
To better understand how people perceive the benefits, it is worth examining its broad-based use. Express Legal Funding conducted a nationwide survey of 100 U.S.-based adults in March 2025. The results offer valuable insight into how Americans are using ChatGPT, what types of advice they trust it to give, and whether they believe it’s a force for good or something more concerning.
Top Insights: How People Use and Trust ChatGPT in 2025
- 60% of U.S. adults say they’ve used ChatGPT for advice or information
- 70% of users found the advice helpful
- Most trusted topics: Career, Education, Product Recommendations
- Least trusted: Legal and Medical Advice
- 34% report they would trust ChatGPT more than an actual human expert
- Only 11.1% believe ChatGPT will improve their finances
- Younger users and iPhone users trust ChatGPT more
- High-income earners and older adults are more skeptical of ChatGPT
- Only 14.1% strongly agree ChatGPT will benefit humanity
It’s not surprising that there is an age disparity with respect to usage. The survey reports that:
- 84% of adults aged 18–29 said they’ve used ChatGPT for advice or information.
- In contrast, only 22.7% of those aged 60 and above have used it.
The survey concludes that younger adults, who are often more tech-savvy and open to digital experimentation, are driving the adoption of AI chatbots, while older adults may still be skeptical, unfamiliar with the technology, or concerned about its accuracy and safety.
Most Common ChatGPT Advice Categories
This dataset from the study highlights the types of advice U.S. adults sought from ChatGPT, including educational, financial, and medical topics, based on 2025 survey results.
| Type of ChatGPT Advice Used | % of ChatGPT Users |
| --- | --- |
| 📘 Educational | 50.0% |
| 💰 Financial | 33.3% |
| 🛍️ Product Recommendation | 30.0% |
| 🗞️ News / Current Events | 26.7% |
| 🏥 Medical | 23.3% |
| 💼 Career | 20.0% |
| 🧠 Mental Health | 18.3% |
| 💞 Relationship Advice | 15.0% |
| ⚖️ Legal | 13.3% |
As the table shows:
- Educational help (50%) was the top use case — highlighting how AI is being used as a learning tool.
- Financial advice (33.3%) and product recommendations (30%) were also popular, reflecting the growing role of AI in daily decision-making.
- More sensitive topics — like medical (23.3%) and legal advice (13.3%) — were used less often, likely due to lower levels of trust.
It is noteworthy that 70 percent of users felt that ChatGPT was useful (it led to a good result), while 10 percent found it to be harmful (it led to a bad result).
One takeaway is clear: Americans are willing to consult AI for important life choices, but they remain cautious in areas where incorrect advice could have serious consequences. The results also show that ChatGPT is not just being used for trivia or writing help; users are turning to it for real advice on real-life matters.
Ethical Risks
Ethical risks include a lack of transparency, erosion of privacy, poor accountability, and workforce displacement and transitions. The existence of such risks affects whether AI systems should be trusted. To build trust through transparency, organizations should clearly explain what data they collect, how it is used, and how the results affect customers.
Data security and privacy are important considerations in deciding whether to use ChatGPT, especially in the workplace. As an AI system, ChatGPT has access to vast amounts of data, including sensitive financial information, and there is a risk that this data could be compromised. Essential security measures must be in place to protect it from unauthorized access.
Pittelkow points out that:
“While ChatGPT can provide helpful suggestions, it is not as good at decision-making or personalizing scripts based on personality or organizational culture. An effective way to use ChatGPT and similar AI programs is to ensure a human or group of humans is reviewing the data, testing it, and implementing the results in a way that makes sense for the organization using it. For example, with job descriptions written by an AI program, at least one human should ensure the details make sense with what the organization does and does not do.”
One way OpenAI works to prevent the release of inappropriate content is by asking humans to flag content for the model to ban. Of course, this method raises a number of ethical considerations. Utilitarians would argue that it is ethical because the ends justify the means: the masses are spared exposure to bad content because only a few reviewers are subjected to it. The value of processing large amounts of data and responding with answers can simplify workplace processes, but the possible displacement of workers needs to be considered.
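On the content-filtering side specifically, automated screening can complement that human flagging. As a hedged illustration (a sketch of a possible workflow, not a description of OpenAI's internal pipeline), OpenAI's public moderation endpoint can be called from the same Python SDK to flag problematic text before it reaches users:

```python
# Sketch: screening text with OpenAI's moderation endpoint before display.
# Assumes the openai SDK and an OPENAI_API_KEY as before; this automated
# check complements, rather than replaces, human content review.
from openai import OpenAI

client = OpenAI()

result = client.moderations.create(input="Some user-submitted text to screen.")
verdict = result.results[0]

if verdict.flagged:
    print("Content flagged; route to a human reviewer.")
else:
    print("Content passed the automated screen.")
```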
To prevent unethical behaviors, such as users asking the program to write papers they then pass off as their own, some technology developers are creating AI specifically to combat nefarious uses of AI. One such technology is ZeroGPT, which can help people determine whether content was generated by a human or by AI.
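Programmatically, such detectors are typically called over HTTP. The sketch below uses a placeholder endpoint, auth header, and response schema of my own invention, not ZeroGPT's actual API, purely to illustrate the workflow; anyone integrating a real detector should follow its own documentation.

```python
# Hypothetical sketch of calling an AI-content detector over HTTP.
# The URL, auth header, request fields, and response schema below are
# placeholders, NOT ZeroGPT's actual API; consult the real documentation.
import requests

essay_text = "Paste the text to be checked here."

resp = requests.post(
    "https://api.example-detector.com/v1/detect",  # placeholder endpoint
    headers={"Authorization": "Bearer YOUR_API_KEY"},  # placeholder auth
    json={"text": essay_text},
    timeout=30,
)
resp.raise_for_status()
print(resp.json())  # e.g. {"ai_probability": 0.87} in this invented schema
```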
Conclusions
The ethical use of AI should be addressed by all organizations to build trust in the system and satisfy the needs of stakeholders for accurate and reliable information. A better understanding of machine learning would go a long way to achieve this result.
Professional judgment is still necessary in AI to assess the value of the information the system produces. Unless the data is reliably provided and processed, AI will produce results that are inaccurate, incomplete, or incoherent, and machine learning will be compromised as a result.
Posted by Dr. Steven Mintz, aka Ethics Sage, on July 7, 2025. Learn more about his activities at https://www.stevenmintzethics.com/ and by signing up for his newsletter.