Explore challenging problems in robotics in this lecture from the Theory of Reinforcement Learning Boot Camp. Delve into topics such as backflips, models, experiments, perception, and Q functions. Examine deep models, data-driven versus physics-based approaches, state space, and use cases for online optimization and state estimation. Investigate physics models, Lagrangian state parameters, gradient-based policy search, and static output feedback. Learn about over-parameterization, Euler parameters, and lessons from roboticists. Analyze the 2D Hopper, rare-event simulation, traditional approaches, and failure cases. Discover insights on hopping robots, manipulation, planar grippers, robot simulators, occupation measures, convergence, and the relationship between language and state in linear models.