Unlocking the Power of Convex-Constrained Machine Learning

Hey there, fellow machine learning enthusiasts! I’m on the hunt for some exciting convex-constrained ML problems to put a new optimization algorithm to the test. Specifically, I’m looking for use cases that involve minimizing a cost function over a convex constraint set, which can be tackled by gradient-based, projection-free algorithms like Frank-Wolfe (FW).

You might be wondering, why FW algorithms? Well, traditional Projected Gradient Descent (PGD) must project back onto the feasible set after every gradient step, and that projection can be expensive (for the nuclear-norm ball, for instance, it requires a full SVD). FW avoids projections entirely: each iteration solves a linear problem over the constraint set via a linear minimization oracle (LMO), which is often far cheaper, making FW more efficient and scalable on such sets.
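To make that concrete, here is a minimal sketch of the vanilla FW loop (my own illustration in NumPy; the function names and the toy L2-ball example are assumptions, not anyone’s library API):

```python
import numpy as np

def frank_wolfe(grad, lmo, x0, n_iters=200):
    """Vanilla FW: linearize the objective at x, ask the LMO for the
    constraint-set point minimizing that linear model, then step toward
    it with the classic 2/(t+2) schedule. No projection is computed."""
    x = x0.copy()
    for t in range(n_iters):
        s = lmo(grad(x))          # argmin over s in C of <grad f(x), s>
        gamma = 2.0 / (t + 2.0)   # standard diminishing step size
        x = (1.0 - gamma) * x + gamma * s
    return x

# Toy run: minimize ||x - b||^2 over the unit L2 ball.
b = np.array([2.0, -1.0])
l2_lmo = lambda g: -g / np.linalg.norm(g)  # LMO for the unit L2 ball
x_star = frank_wolfe(lambda x: 2.0 * (x - b), l2_lmo, np.zeros(2))
print(x_star)  # approaches b / ||b||, the ball's point closest to b
```

Note that the loop touches the constraint set only through `lmo`, which is exactly what makes the problems below good candidates: each one admits a cheap oracle.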

So, I’ve been digging around for problems that fit this description, and I’ve come up with a few promising candidates:

* Adversarial attacks: modifying images in an imperceptible way to mislead classifiers, with the modification constrained to an ε-ball to keep it small and realistic.

* Polynomial regression and compressed sensing: seeking sparse representations by constraining coefficients to live in an L1-norm ball.

* Matrix completion: reformulating the problem with a nuclear-norm-ball constraint on the matrix, which is convex (closed-form LMOs for all three of these constraint sets are sketched right after this list).
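For all three constraint sets above, the linear minimization oracle has a simple closed form. The sketches below are my own illustrations (NumPy only; the function names and the `eps`/`tau`/`delta` radii are assumed parameters):

```python
import numpy as np

def linf_ball_lmo(g, eps):
    """L-infinity eps-ball (adversarial perturbations): the linear
    minimizer is -eps * sign(g), an O(n) operation."""
    return -eps * np.sign(g)

def l1_ball_lmo(g, tau):
    """L1 ball of radius tau (sparse regression / compressed sensing):
    the minimizer is one signed, scaled basis vector, so FW iterates
    are sums of a few atoms and stay sparse."""
    s = np.zeros_like(g)
    i = np.argmax(np.abs(g))
    s[i] = -tau * np.sign(g[i])
    return s

def nuclear_ball_lmo(G, delta):
    """Nuclear-norm ball of radius delta (matrix completion): the
    minimizer is -delta * u1 v1^T built from the top singular pair of
    the gradient. Only one leading singular pair per step, versus the
    full SVD the projection in PGD would need."""
    U, _, Vt = np.linalg.svd(G)  # svds(G, k=1) is typical at scale
    return -delta * np.outer(U[:, 0], Vt[0, :])
```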

I’m also interested in optimization over the set of Doubly Stochastic Matrices (the Birkhoff polytope), but I haven’t found any concrete applications yet. If you have any ideas, I’d love to hear them! (A sketch of why FW suits this set follows below.)
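In the meantime, here is why FW looks like a natural fit for this set: by the Birkhoff-von Neumann theorem, a linear function over doubly stochastic matrices is minimized at a permutation matrix, so the LMO is a single linear assignment solve, while projecting onto the polytope has no comparably simple form. A minimal sketch (my own, using SciPy’s `linear_sum_assignment`):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def birkhoff_lmo(G):
    """LMO over the Birkhoff polytope: min <G, S> over doubly
    stochastic S is attained at a permutation matrix, found here by a
    Hungarian-style linear assignment solver."""
    rows, cols = linear_sum_assignment(G)
    P = np.zeros_like(G)
    P[rows, cols] = 1.0
    return P
```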

These problems are just the beginning, and I’m excited to explore more convex-constrained ML use cases. If you have any suggestions or insights to share, please do!
