One of the biggest struggles in ML research is constructing objective functions that actually capture the researcher's goals. The definition of the objective function becomes especially tricky when the goal is generalizable AI.
This excellent paper, for instance, attempts to define an objective function that rewards an agent's curiosity.
If we could measure intelligent behavior well, it might be possible to run an optimization in which the parameters of a simulation, such as a cellular automaton, are tuned to maximize the emergence of increasingly intelligent behavior.
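To make the idea concrete, here is a minimal sketch of such an optimization loop over the 256 elementary cellular automaton rules. Everything here is hypothetical: the `behavior_score` function uses Shannon entropy of cell activity as a deliberately crude placeholder objective, since defining a real measure of intelligent behavior is exactly the open problem.

```python
import numpy as np

def step_ca(state, rule):
    """One update of a 1D elementary cellular automaton (Wolfram rule)."""
    left = np.roll(state, 1)
    right = np.roll(state, -1)
    idx = 4 * left + 2 * state + right      # 3-cell neighborhood as a 3-bit index
    table = (rule >> np.arange(8)) & 1      # rule number -> 8-entry lookup table
    return table[idx]

def behavior_score(rule, width=64, steps=200, seed=0):
    """Placeholder metric: Shannon entropy of average cell activity.
    A real 'intelligence' objective would go here."""
    rng = np.random.default_rng(seed)
    state = rng.integers(0, 2, width)
    history = []
    for _ in range(steps):
        state = step_ca(state, rule)
        history.append(state)
    p = float(np.mean(history))             # fraction of 'on' cells over the run
    if p in (0.0, 1.0):                     # dead or saturated: zero entropy
        return 0.0
    return -(p * np.log2(p) + (1 - p) * np.log2(1 - p))

# Exhaustive search over the rule space, maximizing the proxy objective.
best_rule = max(range(256), key=behavior_score)
print(best_rule, behavior_score(best_rule))
```

For richer simulations the exhaustive search would be replaced by something like an evolutionary algorithm, but the structure is the same: a parameterized world, a behavioral metric, and an outer optimization loop.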
I vaguely remember coming across a cross-disciplinary group of researchers who were attempting to use the information-theoretic concept of entropy to measure intelligent behavior, but I cannot find any resources about it now. So: is there a scientific field dedicated to the quantification of intelligent behavior?
The field is called Integrated Information Theory (IIT), initially proposed by Giulio Tononi. It attempts to quantify the consciousness of a system by formally defining the phenomenological experience of consciousness and computing a value, Phi, meant as a proxy for "consciousness".
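For small discrete systems, Phi can actually be computed with the PyPhi library released by Tononi's group. The sketch below follows the 3-node example network from PyPhi's documentation, assuming its documented API: you specify a state-by-node transition probability matrix and a connectivity matrix, pick a current state, and ask for the Phi of the resulting subsystem.

```python
import numpy as np
import pyphi

# Example 3-node network from the PyPhi documentation.
# tpm is in state-by-node form: row = current network state,
# column = probability each node is 'on' at the next step.
tpm = np.array([
    [0, 0, 0],
    [0, 0, 1],
    [1, 0, 1],
    [1, 0, 0],
    [1, 1, 0],
    [1, 1, 1],
    [1, 1, 1],
    [1, 1, 0],
])
cm = np.array([          # directed connectivity matrix
    [0, 0, 1],
    [1, 0, 1],
    [1, 1, 0],
])
network = pyphi.Network(tpm, cm=cm, node_labels=('A', 'B', 'C'))
state = (1, 0, 0)                                # current state of A, B, C
subsystem = pyphi.Subsystem(network, state, (0, 1, 2))
print(pyphi.compute.phi(subsystem))              # Phi of the whole system
```

Note that exact Phi computation scales super-exponentially with system size, which is why published examples are limited to a handful of nodes; this is one of the main practical criticisms of IIT as a measurement tool.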