
5. RBM Optimization

    1. What are the critical points of the EM algorithm for $RBM_{3,2}$?

      Problem 5.1.

      What are the critical points of the EM algorithm for $RBM_{3,2}$? Can the MLE degree for the strata of $RBM_{3,2}$ be determined?
          The EM fixed points may be known for $\mathcal{M}_{3,3}$, but since fixed points depend on the specific parameterization of the model, those of $RBM_{3,2}$ remain unknown.
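          For orientation, here is the standard fixed-point characterization of EM for any discrete latent-variable model, specialized (assuming the convention that $RBM_{3,2}$ has three visible and two hidden binary units) to this setting. Writing $p_{\theta}(v,h)$ for the joint distribution and $\hat p$ for the empirical distribution, the EM fixed points are the parameters $\theta^{*}$ with
          $$\theta^{*} \;\in\; \operatorname*{arg\,max}_{\theta} \; \sum_{v \in \{0,1\}^{3}} \hat p(v) \sum_{h \in \{0,1\}^{2}} p_{\theta^{*}}(h \mid v) \, \log p_{\theta}(v,h),$$
          and every interior fixed point is a critical point of the marginal log-likelihood $\ell(\theta) = \sum_{v} \hat p(v) \log \sum_{h} p_{\theta}(v,h)$. The problem asks what these points look like concretely under the RBM parameterization.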
    2. Characterizing the optimization landscape for RBMs

      Problem 5.2.

      How do we characterize the local optima of the RBM optimization landscape as a function of the architecture and the number of empirical samples?
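          As a concrete starting point, $RBM_{3,2}$ is small enough that the likelihood and its exact gradient can be computed by summing over all $2^{3} \cdot 2^{2} = 32$ joint states, so the optima reached from random initializations can be tallied directly. The sketch below is a minimal numpy experiment under assumed conventions: energy $E(v,h) = -v^{\top} W h - b^{\top} v - c^{\top} h$, illustrative hyperparameters, and rounded log-likelihood values as a crude proxy for distinct local optima; it is not part of the problem statement.

            import itertools
            import numpy as np

            rng = np.random.default_rng(0)

            # All joint states of RBM_{3,2}: 3 visible, 2 hidden binary units.
            V = np.array(list(itertools.product([0, 1], repeat=3)), float)  # (8, 3)
            H = np.array(list(itertools.product([0, 1], repeat=2)), float)  # (4, 2)

            def ll_and_grad(W, b, c, p_hat):
                """Exact average log-likelihood of p_hat and its gradient."""
                lj = V @ W @ H.T + (V @ b)[:, None] + (H @ c)[None, :]  # log unnorm. p(v,h)
                lv = np.logaddexp.reduce(lj, axis=1)         # log unnorm. p(v)
                logZ = np.logaddexp.reduce(lv)               # log partition function
                p_vh = np.exp(lj - logZ)                     # model joint p(v,h)
                p_h_v = np.exp(lj - lv[:, None])             # posterior p(h|v)
                q = p_hat[:, None] * p_h_v                   # data term p_hat(v) p(h|v)
                gW = V.T @ q @ H - V.T @ p_vh @ H            # E_data[vh^T] - E_model[vh^T]
                gb = V.T @ (p_hat - p_vh.sum(1))
                gc = (q.sum(0) - p_vh.sum(0)) @ H
                return p_hat @ (lv - logZ), gW, gb, gc

            p_hat = rng.dirichlet(np.ones(8))   # a random target distribution on {0,1}^3
            optima = set()
            for trial in range(30):
                W, b, c = rng.normal(size=(3, 2)), rng.normal(size=3), rng.normal(size=2)
                for _ in range(3000):                        # plain gradient ascent
                    ll, gW, gb, gc = ll_and_grad(W, b, c, p_hat)
                    W += 0.2 * gW; b += 0.2 * gb; c += 0.2 * gc
                ll = ll_and_grad(W, b, c, p_hat)[0]
                optima.add(round(float(ll), 4))              # distinct values ~ distinct optima
            print(sorted(optima))

          Repeating this over many targets $\hat p$ and architectures is one empirical way to probe how the number and quality of local optima vary, which is what the problem asks to characterize in general.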
    3. Is using dropout during RBM training equivalent to adding a regularizing penalty term to the training objective function?

      Problem 5.3.

      [Eric Auld] Is using dropout during RBM training equivalent to adding a regularizing penalty term to the training objective function?
          This question may have been partly motivated by the observation that other regularization techniques for training feedforward neural networks have been shown to be equivalent to adding a regularizer, or penalty, term to the objective function. The instance that readily comes to mind is early stopping, which, under suitable conditions (for instance a quadratic approximation of the objective), is equivalent to adding an $L_2$ penalty on the size of the network weights.
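          The problem leaves the training procedure unspecified; one plausible reading, sketched below under that assumption, applies a Bernoulli mask to the hidden units within each contrastive-divergence (CD-1) update, by analogy with dropout in feedforward networks. The function name, hyperparameters, and the choice to skip the usual $1/(1-p)$ rescaling are all illustrative, not a fixed definition of "dropout for RBMs".

            import numpy as np

            rng = np.random.default_rng(1)

            def sigmoid(x):
                return 1.0 / (1.0 + np.exp(-x))

            def cd1_step_with_dropout(W, b, c, v0, drop_p, lr=0.05):
                """One CD-1 update on a minibatch v0 of visible vectors,
                with Bernoulli dropout on the hidden units (hypothetical scheme)."""
                n_hid = c.shape[0]
                r = (rng.random(n_hid) > drop_p).astype(float)   # dropout keep-mask

                # Positive phase: hidden activations given data, masked.
                ph0 = sigmoid(v0 @ W + c) * r
                h0 = (rng.random(ph0.shape) < ph0).astype(float)

                # Negative phase: one Gibbs step back to visibles and hiddens.
                pv1 = sigmoid(h0 @ W.T + b)
                v1 = (rng.random(pv1.shape) < pv1).astype(float)
                ph1 = sigmoid(v1 @ W + c) * r

                # Approximate gradient step: positive minus negative statistics.
                W += lr * (v0.T @ ph0 - v1.T @ ph1) / len(v0)
                b += lr * (v0 - v1).mean(axis=0)
                c += lr * (ph0 - ph1).mean(axis=0)
                return W, b, c

            # Toy usage: random data over {0,1}^3, two hidden units, 20% dropout.
            v_data = rng.integers(0, 2, size=(100, 3)).astype(float)
            W, b, c = 0.01 * rng.normal(size=(3, 2)), np.zeros(3), np.zeros(2)
            for epoch in range(200):
                W, b, c = cd1_step_with_dropout(W, b, c, v_data, drop_p=0.2)

          The question is then whether the expected update under the random mask $r$ matches the gradient of the original objective plus some explicit penalty term, as it does for early stopping in the quadratic case.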

                  Cite this as: AimPL: Boltzmann Machines, available at http://aimpl.org/boltzmann.