
My Computer Tells Me My Teachers Were Right All Along

I’ve spent most of my life at the intersection of computers and learning, first learning how to teach computers, then teaching computers how to learn. A CS master’s from Stanford, an internship at Uber, and a role as a machine learning scientist at Tesla have all been exciting opportunities to learn about AI.

Today, I’m a co-founder and CTO at Pangram Labs, where we’re building AI systems that detect text created by AI. More specifically, we’re teaching software to learn how to spot AI. We think this work is important because, in a world becoming saturated with AI, being able to tell the real and authentic from the robotic and artificial will be increasingly essential.

What I’ve Learned About Learning Itself

As we’ve built Pangram, and through my work on other self-learning AI projects, I’ve gained what I think is a genuine insight into learning itself. Or, more accurately, into how we deploy what we learn and move from information to informed action.

Until recently, many people believed, and some still do, that specialised, single-discipline learning is best. In education, the thinking goes that if you’re going into computer science, all those required literature and music courses are pointless. Similarly, if you want to study medicine, you’re best off learning all you can about medicine while setting everything else aside.

Early on, that approach made sense to computer people too. And so, even though early AI systems typically began with basic, broad training data, there was a strong pull to load them up with area-specific expertise, rules, and heuristics. If we wanted our LLM to help chemists, for example, we trained it heavily on chemistry; there was little point in feeding it too much Shakespeare or Mamet.

Turns out, that approach has significant limits. 

To quote the brilliant Rich Sutton, winner of the 2024 Turing Award, essentially the Nobel Prize for computer science, in his essay “The Bitter Lesson”: “The biggest lesson that can be read from 70 years of AI research is that general methods that leverage computation are ultimately the most effective, and by a large margin.” The important word there is “general.”

We’re Seeing the Same Thing

At Pangram, we’re seeing the same thing: general, broad training is best for AI, and for finding AI.

In our work, our machine learning model, which is trained on a broad dataset, separates AI-speak from genuine human writing about 99.9% of the time. Having seen music, pop culture, literature, academic writing, art, and romance, our tools get really, really good at whatever they are asked to do, no matter how nuanced. By contrast, a model trained on only one type of writing may be 99% accurate most of the time, but it fails miserably on the harder cases. When separating the authentic from the artificial, that last point-nine percent matters. In fact, broad training is turning out to be so effective for our detection tools that we can even spot output from AI models that have not yet been publicly released.
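To make that last point-nine percent concrete, here is a back-of-the-envelope sketch in Python. Only the 99% and 99.9% accuracy figures come from the paragraph above; the volume of documents is a hypothetical assumption, chosen purely for illustration.

```python
# Illustrative sketch: expected error counts at 99% vs 99.9% accuracy.
# The two accuracy figures come from the article; the document volume
# below is a hypothetical assumption for illustration only.

DOCUMENTS = 100_000  # hypothetical number of essays screened

for accuracy in (0.99, 0.999):
    errors = DOCUMENTS * (1 - accuracy)
    print(f"{accuracy:.1%} accurate -> ~{errors:,.0f} misclassified")

# Output:
# 99.0% accurate -> ~1,000 misclassified
# 99.9% accurate -> ~100 misclassified
```

At that scale, the broadly trained model makes roughly a tenth as many mistakes, and for anyone acting on a detector’s verdict, that is the difference between a rare error and a routine one.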

It Sounds Like a Liberal Education, Because It Is

This is not a purely CS realisation. If the pattern of gaining more power from diverse training sounds familiar, it should. It sounds quite a bit like a traditional liberal higher education, where studying a variety of seemingly unconnected subjects is encouraged, regardless of career track or passion.

Educators used to refer to this kind of training as learning to learn. A college education was supposed to start you along a major while instilling a broad range of knowledge and experience to draw on once you landed in your career. It was supposed to teach you to know lots of general things, and to know how to figure out the rest.

Turns out, based on what we have been learning from machines, the academics were right. If we can infer from how machines accept, store, process, and return information, how they learn, how they act on what they know, then yes, there is value in knowing a thing or two very well. But when you reach the end of where that deep knowledge can take you, a broad base that lets you see differently is a kind of cognitive superpower.

And it’s fascinating to (re)learn that lesson from the very machines we’re now teaching to learn.

By Bradley Emi, Co-Founder and CTO at Pangram Labs

Bradley Emi hails from Stanford. He and his friend and fellow Stanford alum, Max Spero, founded their first AI company in 2023.

