Regularization tries to stop a model from fitting the training data too literally. In practice, it often means accepting a bit more training loss in exchange for better performance on unseen examples.
This idea shows up in posts on XGBoost, KANs, and reinforcement learning. Strong training metrics are not the real goal if the model becomes brittle, unstable, or overly specialized to the data it already saw.
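That trade-off is easy to see in a toy ridge-regression sketch (the setup below is purely illustrative, not drawn from any of the posts mentioned above). It fits a degree-9 polynomial to a handful of noisy samples with and without an L2 penalty: the penalty is guaranteed to raise training error at least slightly, since the unpenalized fit is by definition the training-loss minimizer, and in return it usually tames the wild oscillations that hurt test performance.

```python
import numpy as np

def ridge_fit(X, y, lam):
    # Closed-form ridge solution: w = (X^T X + lam * I)^{-1} X^T y.
    # lam = 0 recovers ordinary least squares.
    n_features = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(n_features), X.T @ y)

def mse(X, y, w):
    return float(np.mean((X @ w - y) ** 2))

def features(x, degree=9):
    # Polynomial feature map: easy to overfit with few samples.
    return np.vander(x, degree + 1)

rng = np.random.default_rng(0)
# A few noisy samples of a smooth function.
x_train = rng.uniform(-1, 1, size=12)
y_train = np.sin(3 * x_train) + 0.1 * rng.normal(size=12)
x_test = np.linspace(-1, 1, 200)
y_test = np.sin(3 * x_test)

Xtr, Xte = features(x_train), features(x_test)

w_plain = ridge_fit(Xtr, y_train, lam=0.0)   # fits training data as closely as possible
w_reg = ridge_fit(Xtr, y_train, lam=1e-2)    # accepts some training loss

print(f"train MSE: plain={mse(Xtr, y_train, w_plain):.4f}  ridge={mse(Xtr, y_train, w_reg):.4f}")
print(f"test  MSE: plain={mse(Xte, y_test, w_plain):.4f}  ridge={mse(Xte, y_test, w_reg):.4f}")
```

The penalty strength `lam` is the knob: at zero you get the brittle, overly literal fit; too large and the model underfits. The "right" amount is whatever generalizes best on held-out data.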
