Message from the interim Dean
August 14, 2025
Scaling up while scaling down costs
The traditional way to program a computer is to write detailed instructions for completing a task. Say you want software that can spot irregularities on a CT scan. A programmer would have to write step-by-step protocols for countless potential scenarios.
Instead, a machine learning model trains itself. A human programmer supplies relevant data—text, numbers, photos, transactions, medical images—and lets the model find patterns or make predictions on its own.
Throughout the process, a human can tune the model's parameters to get more accurate results, without knowing exactly how the model turns its input data into an output.
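The idea above can be sketched in a few lines. This is a toy illustration with hypothetical data, not any model from the story: instead of hand-coding a rule, we hand the model examples and let it recover the pattern itself, while the human only adjusts a knob (the learning rate).

```python
import numpy as np

# Hypothetical toy data: each example pairs an input measurement with
# an outcome. The hidden pattern is y ≈ 3x + 2, but the programmer
# never writes that rule down.
rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=200)
y = 3.0 * X + 2.0 + rng.normal(0, 0.1, 200)

# A tiny linear model trained by gradient descent. The human supplies
# data and a tunable parameter (learning_rate); the model finds the fit.
w, b = 0.0, 0.0
learning_rate = 0.01  # the "knob" a human tweaks for better accuracy
for _ in range(2000):
    err = (w * X + b) - y
    w -= learning_rate * 2 * np.mean(err * X)
    b -= learning_rate * 2 * np.mean(err)

print(round(w, 1), round(b, 1))  # the model has recovered the pattern
```

Nothing in the loop encodes the 3 or the 2; they emerge from the data, which is the contrast with step-by-step programming the passage draws.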
Machine learning is energy-intensive and wildly expensive. To maximize profits, industry trains models on smaller datasets before scaling them up to real-world scenarios with much larger volumes of data.
“We want to be able to predict how much better the model will do at scale. If you double the size of the model or double the size of the dataset, does the model become two times better? Four times better?” said Zhang.
Read the full story by Lisa Potter in @TheU.