Lyle CS Paper Named Highlighted Article in IEEE ComputingEdge
CS faculty members Xihao Xie and Jia Zhang were honored in the February edition of IEEE ComputingEdge for their work on responsible, human-centered artificial intelligence.
Dr. Xihao Xie, Clinical Assistant Professor with the Lyle Department of Computer Science, and Dr. Jia Zhang, Inaugural Robert H. Dedman Jr. Endowed Department Chair, were recently honored by the Institute of Electrical and Electronics Engineers (IEEE) for their novel approach to AI training — one that aims to make artificial intelligence more responsible and trustworthy, modeling a system of critical thinking and bias recognition much like our own.
Their paper, entitled "Beyond Pattern Recognition: Teaching AI to Think Critically Before It Learns," was originally published in the November edition of IEEE Computer and selected as a highlighted article in the February edition of IEEE ComputingEdge. In collaboration with Dr. Jeffrey Voas of the National Institute of Standards and Technology (NIST), their work introduces a novel "critical learning" paradigm that enables AI systems to evaluate the reliability of training data before learning, moving beyond passive pattern recognition toward principled reasoning and verification.
Generative artificial intelligence, familiar to many through platforms like ChatGPT, Gemini, Claude, and Grok, is built on large language models capable of producing text, images, code, and video in response to user prompts. Trained on a massive and growing body of human-generated data, these systems learn to recognize patterns and use them to create new outputs.
According to Lyle researchers, the very data sets and training procedures behind some of the most remarkable technological leaps of our time are also the technology’s “Achilles heel.”
“Humans do not simply absorb information — we question it, evaluate its reliability, and seek the truth,” Dr. Xie explained. “Critical thinking and guided learning are fundamental components of human intelligence, yet largely absent from AI systems. Our research explores the idea of critical learning — where AI systems evaluate the reliability of the data they learn, rather than accept it indiscriminately.”
“Modern AI systems are undeniably powerful, but is that all intelligence should be?”
Their work explains that, without safeguards, generative artificial intelligence ingests training data indiscriminately, adding information to its growing repository with no checks on reliability or factual accuracy. When AI is trained on unreliable or biased data, those flaws carry through to its output. As artificial intelligence takes on a larger role in society, how its training data is acquired and vetted has become one of the most pressing concerns across disciplines.
“As AI systems are increasingly used in high-impact domains such as healthcare, finance, governance, and public information systems, the consequences of unreliable training data become much more serious,” Dr. Xie emphasizes.
To combat this, their work centers on training AI to distinguish reliable data from unreliable data, establishing a framework of checks and balances that assigns reliability metrics and factual safeguards to training datasets. Under their approach, AI moves from treating each training example as a pair (an input and an output) to a triplet that adds a third element: a reliability score. This seemingly subtle change could reshape how we build and train AI systems for the better.
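To make the triplet idea concrete, here is a minimal, hypothetical sketch (not the authors' implementation) of how a reliability score might temper what a model learns: each example carries a score in [0, 1], and low-reliability examples contribute proportionally less to the result.

```python
# Illustrative sketch of the (input, output, reliability) triplet idea.
# Function and variable names are assumptions for this example only.

def fit_weighted_model(triplets):
    """Fit a trivial 'model' (a reliability-weighted average of outputs
    per input), so unreliable examples barely influence the estimate."""
    sums, weights = {}, {}
    for x, y, reliability in triplets:
        sums[x] = sums.get(x, 0.0) + reliability * y
        weights[x] = weights.get(x, 0.0) + reliability
    return {x: sums[x] / weights[x] for x in sums if weights[x] > 0}

# Two sources disagree about input "a"; the untrusted one (score 0.1)
# barely moves the learned value away from the trusted answer.
data = [
    ("a", 1.0, 0.9),  # trusted source says 1.0
    ("a", 0.0, 0.1),  # untrusted source says 0.0
]
model = fit_weighted_model(data)
```

In a pair-based system both examples would count equally; with the reliability score, the trusted source dominates the outcome.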
To them, the honor of having their work recognized by the IEEE is both humbling and exciting: “This recognition strengthens our commitment to advancing this line of work and continuing to investigate how AI systems can become more reliable, trustworthy, and human-centered.”
“By encouraging AI systems to question and evaluate the information they learn from, we hope to reduce vulnerabilities to external threats introduced from training data,” Dr. Xie explained. “Ultimately, our goal is to help build AI technologies that strengthen trust in digital systems and support healthier information ecosystems, both in the cyber world and in the broader society these systems serve.”
Please join us in congratulating Dr. Xie and Dr. Zhang on this incredible honor, which reflects their leadership in responsible AI research and Lyle’s commitment to harnessing next-generation intelligent systems to build a better world.
About the Bobby B. Lyle School of Engineering
SMU’s Lyle School of Engineering thrives on innovation that transcends traditional boundaries. We strongly believe in the power of externally funded, industry-supported research to drive progress and provide exceptional students with valuable industry insights. Our mission is to lead the way in digital transformation within engineering education, all while ensuring that every student graduates as a confident leader. Founded in 1925, SMU Lyle is one of the oldest engineering schools in the Southwest, offering undergraduate and graduate programs, including master’s and doctoral degrees.
About SMU
SMU is a nationally ranked teaching and research university in the dynamic city of Dallas and a member of the prestigious Atlantic Coast Conference. SMU’s alumni, faculty, and more than 12,000 students in eight degree-granting schools demonstrate an entrepreneurial spirit as they lead change in their professions, communities, and the world.