I remember studying "Probably Approximately Correct" (PAC) learning and related frameworks, and it was a pretty cool way of building definitions, theorems, and proofs to bound and reason about ML models. To my knowledge, there isn't really anything comparable for large neural networks; maybe someday.
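To make the "bound" part concrete, here's the textbook PAC sample-complexity result for a finite hypothesis class in the realizable setting (standard notation, not anything specific to this comment): with probability at least $1 - \delta$, any hypothesis in $\mathcal{H}$ that is consistent with

\[ m \ge \frac{1}{\varepsilon}\left(\ln|\mathcal{H}| + \ln\frac{1}{\delta}\right) \]

i.i.d. training examples has true error at most $\varepsilon$. The catch for large networks is that $|\mathcal{H}|$ (or analogues like VC dimension) grows with parameter count, so bounds of this shape tend to come out vacuous for modern deep models.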