
I believe the difference between humans and computers is the ability to think critically and solve problems creatively. However, I have colleagues who believe AI is capable of both critical thought and creative thinking. I am not convinced, so I decided to find out by using AI in my classroom and business.
Many of us (including me) avoid the use of artificial intelligence in the classroom and view it as cheating. At the recent AAC&U (American Association of Colleges & Universities) conference, Bryan Alexander, a futurist and scholar at Georgetown University, shared a comment that really stuck with me: “What is considered cheating in a classroom is considered progress in a business.”
I decided to use some critical thinking of my own to explore the relationship between AI and critical thinking. I started by exploring the AAC&U Critical Thinking VALUE Rubric which evaluates and discusses student learning at an institutional level. Here are the key elements:
1. Explanation of Issues: The ability to identify and summarize the main problem or question with clarity and depth.
2. Evidence: The selection and use of appropriate and credible evidence to support arguments and conclusions.
3. Influence of Context and Assumptions: The ability to identify and consider the influence of context, assumptions, and biases on the issue.
4. Student’s Position: The development of a coherent and logical position that is well-supported by evidence and reasoning.
5. Conclusions and Related Outcomes: The ability to arrive at conclusions that are logically tied to the analysis and evidence presented, considering the implications and consequences.
My hypothesis is that AI can digest and summarize large amounts of data in seconds, but it cannot create new ideas or understand how people feel (empathize). Getting people to lean into new ideas or processes is the hardest part of the work we do, and that requires empathy to understand their point of view. To test this hypothesis using the Critical Thinking VALUE Rubric, I asked Copilot several questions to see if I agreed with its responses.
- Explanation of Issues: As I assumed, an AI tool like Copilot or ChatGPT can take a prompt and summarize all the key issues with clarity and depth. Prompt: “What are the key issues plaguing higher education and causing financial instability?” Its response was comprehensive, but it did not mention increased discount rates, which for many institutions is a major reason for their deficits.
- Evidence: I asked it for evidence to support its issues and saw that it provided references to a few articles. I would not have chosen these sources as “evidence.” While I do not disagree with the conclusions, there was very little evidence to support them. So, I asked it a follow-up question: “Is there more research to support conclusions for the industry as a whole?” It then pulled in a single article as the only additional evidence. There were no references on the demographic cliff cited from the US Census Bureau. It also relies heavily on articles, which can be false or based on opinion.
- Influence of Context and Assumptions: I asked Copilot, “Why is the public questioning the value of higher education?” Again, it regurgitated the same information and added a couple of sources. It also mentioned that “employers are looking for practical skills and experience that traditional college programs may not provide.” I decided to ask a follow-up question: “What kind of practical training are employers looking for?” It provided a list of all the usual suspects around technical skills, job training, leadership, etc. So, I asked, “What can colleges do to meet these demands?” I liked the suggestions, though none of them are new or creative. I needed to understand if it knew it was biased based on the limited information it was pulling, so I asked, “Are your responses biased in any way?” It politely let me know that “I strive to provide accurate, balanced, and unbiased information based on the data and sources available.”
When I asked it to provide some views to the contrary related to higher education, it cited some studies showing that lower-income or first-generation portions of the population see significant value in higher education.
Armed with this insight, I asked, “What action should higher education take to fix the issues plaguing it and ultimately lead to financial stability?” It provided me with a list of all the usual suspects, including revenue diversification through online programs and phasing out courses with low enrollment. What I really wanted to know was whether it could understand the implications and consequences of its recommendations.
“If you could build a college with high demand, high outcomes, and low cost, what would it look like?” Again, it gave me a lot of the same information we saw in other articles. Remember, AI learns (much like us), so the more questions you ask, the better it will understand the topic and the more comprehensive its answer will be. I decided to just go for it and ask a deeper question: “Can you create a business plan for a new college that meets these criteria, focuses on teaching finance and accounting, and has strong employer relationships to guarantee jobs after graduation?” It suggested stackable credentials as in our earlier conversation, yet it proposed four degree programs and a focus on CPA and CFA industry certifications.
So, the evidence agreed with my hypothesis. There were no new ideas and no ability to understand context or bias.
AI did suggest questions I did not think of asking, however. And these questions led to a few more questions of my own, like “How can we attract industry partners effectively and how do we measure partnership success?”
When I asked Copilot if AI could think critically, this is what it told me.
AI can simulate critical thinking to some extent by analyzing data, identifying patterns, and making decisions based on programmed algorithms. However, it’s important to note that AI doesn’t “think” in the way humans do. Instead, it processes information according to predefined rules and learns from data to improve its performance.
Critical thinking in humans involves a deep understanding, intuition, and the ability to consider abstract concepts, which AI lacks. AI can assist in critical thinking tasks by providing insights and recommendations, but it doesn’t possess true consciousness or self-awareness.
I asked Copilot whether AI is capable of creative thinking.
AI can exhibit forms of creative thinking by generating new ideas, art, music, and even writing stories. It uses techniques like machine learning, neural networks, and generative models. For example, AI can create paintings in the style of famous artists or compose music that mimics certain genres. In fact, Copilot created the image I’m using in this article.
However, AI’s creativity is different from human creativity. It relies on patterns and data it has been trained on, rather than having original thoughts or emotions. AI can combine existing elements in novel ways, but it doesn’t have the subjective experience or intuition that humans do.
How does this relate to higher education?
Technology and artificial intelligence are challenging our critical thinking skills and making them more important than ever. I recall someone telling me, “A map makes us smarter; a GPS makes us dumber.” By relying solely on the GPS and traveling where it tells us to go, we may be more efficient, but we never learn the directions ourselves.
We run the risk that this will occur in all aspects of our lives if we do not train our students on how to use AI in a way that enhances thoughts without replacing them. If we are to teach our students (and our children) how to succeed in an AI-driven world, we must understand technology and artificial intelligence and understand their limitations. We need to think critically about our issues and solve them creatively. How do we build these skills into every aspect of our teaching and learning?
Much of the content we get from traditional media sources has some bias. After all, reporters and journalists are people and can be subjective, even unconsciously. We tend to digest what we hear on social media or the news and assume it is factual, unbiased evidence. We can also ignore the context or biases in the information if they align with our own beliefs.
This takes me back to my husband’s law school example: “If you want to win an argument, you need to argue for the other side.” If we applied this concept in today’s environment, we would better understand the implications and complexities of situations.
AI can now do many entry-level jobs, so we need to teach our students to be smarter than AI. Many critics and employers tell us that our students graduate without the skills necessary to perform today’s jobs. That statement is less about learning the core curriculum and more about the ability to think critically and solve problems creatively. For example, a computer can now enter invoices or select items for an audit, but it still takes an accountant to ask critical questions about the numbers themselves. Just because the vendor is in Ottawa, does the invoice need to be paid in Canadian currency?
Tips for the use of AI:
- Focus on asking better questions. Ask if this makes sense (critical thinking). For example, an AI tool telling me to increase the discount rate to increase enrollment does not mean I should. I need to dig deeper to understand whether this will cause my net tuition to decline.
- Use AI to give you more ideas that feed your creativity, not to replace it.
- Ask AI to evaluate a set of data, like your stock portfolio, knowing that it can only predict the future by using historical information. AI does not understand the modern context of situations (like geopolitics).
- Use it to create a draft for you to edit, rather than asking it to edit your draft; a generated draft provides a strong starting point.
- AI takes on a persona, so tell it what lens to look through and whom it is speaking to.
- Tell AI to ask you clarifying questions. This will ensure it understands the exercise.
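For readers who use an AI tool through code rather than a chat window, the tips above can be baked into a reusable prompt template. This is a minimal sketch only: the function name, the example persona, and the OpenAI-style "messages" format are my own illustrative assumptions, not part of any specific product, and you would pass the result to whatever AI service you actually use.

```python
# A sketch of turning the prompting tips into a template:
# set a persona and audience (tip 5) and instruct the AI to ask
# clarifying questions before answering (tip 6).
# The message format mimics common chat APIs; adapt as needed.

def build_prompt(persona: str, audience: str, task: str) -> list:
    """Assemble chat messages that set a persona, name the audience,
    and tell the AI to ask clarifying questions first."""
    system = (
        f"You are {persona}, speaking to {audience}. "
        "Before answering, ask me any clarifying questions you need "
        "so you fully understand the exercise."
    )
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": task},
    ]

# Hypothetical usage, echoing the discount-rate example above:
messages = build_prompt(
    persona="a higher-education CFO",
    audience="a board of trustees",
    task="Would raising the discount rate to boost enrollment "
         "grow or shrink net tuition revenue?",
)
```

The point is not the code itself but the habit it encodes: every request states who is speaking, who is listening, and an explicit invitation for the AI to question you before it answers.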
Above all else, remember that AI tells you what it thinks you want to hear. When you use the data it provides, you are responsible for its accuracy.