Basu, the Carr P. Collins Chair in Management Information Systems at SMU Cox School of Business, coaches his students to think of AI tools as fluid, not static. As an AI gathers new data, it simply isn’t the same product that arrived out of the box.
“The systems are dynamic, unlike traditional information systems and applications,” says Basu, who also chairs the Information Technology and Operations Management Department. “Even after they’re deployed, they’re changing—which means that the system that was installed six months or a year ago in your company is not the same as the system that just screwed up.
“That creates a real dilemma. If you’re a leader in an organization, and the success of your organization, your reputation, and the welfare of your customers and partners all rely on the system, you have to think about how you go about this in a way that is responsible.”
Like many other technologies, Basu says, AI is transforming and disrupting the world. However, a unique facet of this technological transition, he notes, is how much the move to adopt AI has been led by everyday consumers. As just one benchmark: In January 2023, ChatGPT reached 100 million monthly users faster than any other application in history, just two months after its release.
That breakneck speed of adoption immediately led to demand from potential students and their parents to incorporate AI into the business curriculum. Today, classes at the Cox School are led by some of the top AI thinkers in their respective fields, who strive to balance the principles that make up a business education against the tremendous upside—not to mention sheer enthusiasm—of the suite of applications that have hit the market.
“Ordinary people are using AI before technicians and professionals are using AI,” Basu says. “If you think about it, how many technologies have we had like that? Where laypeople are playing with the technology before the professionals?”
The speed at which people and companies are adopting AI tools complicates many of the concerns about their effective use in business. In addition to the challenges of keeping up with the rapid pace of AI technology innovation, business leaders will also have to deal with key questions about ethics and transparency in AI use.
The “black box” nature of many AI systems is a key source of complexity. Ethically, how should leaders apply AI solutions to problems if those solutions could contain biases against underrepresented people? And how can you assure your employees that you’re making decisions in a transparent manner if you’re relying on AI tools that continually evolve in ways that you can’t readily explain?
“If you look at AI like a black box, it seems capable of doing amazing things,” Basu says. “And then sometimes when it doesn’t—what happened? That’s why we believe in training students to understand how it works, so that it can be a valuable tool, even if it is imperfect.”
The AI surge meets reality
Even if they choose not to concentrate in AI studies, students emerging from today’s universities and business schools are entering an economy that will expect them to navigate AI tools and platforms. The bets that American companies have made on AI are positively staggering. In November 2025, Goldman Sachs released a report estimating that in the previous three years—the ChatGPT era, basically—companies involved in AI tech have risen in value by $19 trillion. For a point of comparison, the gross domestic product of the United States is about $30 trillion.

The investments and valuations have outpaced the available workforce, leading to huge rewards for young AI workers. The Wall Street Journal reported in August 2025 that from 2024 to 2025, the base salaries for nonmanagers in AI-related jobs with less than three years’ experience leapt by 12%. Also, last summer, PwC found that workers with AI skills commanded a 56% premium in their wages.
Yet while companies are rewarding young AI experts, they’re also trying to use AI tools to automate tasks usually given to recent graduates. Between January 2023 and June 2025, job postings for entry-level workers plunged over 35%, per a study by the labor research firm Revelio Labs. The adoption of AI tools doesn’t explain the entire gap, but it’s a huge factor—one that is creating further incentives for students to lean into the technology.
Those huge bets by American companies and the ubiquity of the tools can lead to a false sense of inevitability, however. Venkatesh “Venky” Shankar, the Harold M. Brierley Endowed Professor of Marketing and Chair of the Cox Marketing Department, reminds his students that AI is far from infallible or even authoritative. He incorporates class exercises meant to illustrate the tools’ stochastic nature: in short, the randomness built into how they generate responses, which means that even flawless input data can yield unpredictable outputs.
Shankar recalls one session of his class on AI applications and marketing. He gave the students identical data sets and watched as the exercise produced different results for different students. When this happens, the professor says, the next step is to review the answers as a class and “debug” the results.
They found that the tools made different assumptions for different students. Without that insight into the process, the students would be at the mercy of whatever responses their applications spit out.
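The classroom result Shankar describes has a simple mechanical root: generative tools typically sample from a probability distribution over possible next words rather than always picking the single top choice. A minimal sketch of that sampling step (the function name, vocabulary, and scores below are illustrative, not drawn from any particular tool):

```python
import math
import random

def sample_next_token(logits, temperature=1.0, rng=None):
    """Sample a token index from raw scores via a temperature-scaled softmax."""
    rng = rng or random.Random()
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Draw from the resulting distribution.
    r = rng.random()
    cumulative = 0.0
    for i, p in enumerate(probs):
        cumulative += p
        if r < cumulative:
            return i
    return len(probs) - 1

# Identical inputs, two runs: because the draw is random,
# the chosen "next word" can differ between runs.
vocab = ["rose", "fell", "held"]
logits = [2.0, 1.9, 0.5]  # toy next-word scores for the same prompt and data
run_a = [vocab[sample_next_token(logits)] for _ in range(5)]
run_b = [vocab[sample_next_token(logits)] for _ in range(5)]
print(run_a, run_b)
```

Two students running this with the same data will usually see different sequences, which is the behavior Shankar’s debugging exercise surfaces. Lowering the temperature concentrates the distribution and makes outputs more repeatable.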
“These generative AI tools themselves are some sort of a puzzle to even their creators,” Shankar says. “They take all the previous data available and the reservoir of information on the web. Then they get trained to predict the next word if you’re asking for a textual query or next image or next pixel.”
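Shankar’s description of training, taking previous data and learning to predict the next word, can be illustrated with a drastically simplified counting model. Real systems use neural networks over vast corpora; this sketch, with an invented three-sentence “corpus,” only shows the predict-the-next-word idea in miniature:

```python
from collections import Counter, defaultdict

# A tiny invented corpus standing in for "all the previous data available."
corpus = ("the market rose today . the market fell today . "
          "the market rose sharply .").split()

# "Training": count which word follows which word in the corpus.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Return the continuation seen most often after `word` during training."""
    counts = follows[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("market"))  # "rose" follows "market" twice, "fell" once
```

The same counting idea extends, in principle, to pixels or image patches, which is the parallel Shankar draws for image generation.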