The rise of LLMs and generative AI has driven a dramatic increase in data-center energy consumption, a problem that will only grow as AI becomes more ubiquitous. Our group studies photonics as an enabler of next-generation AI accelerators that can be orders of magnitude faster and more efficient than electronic processors, leveraging the high bandwidth, low latency, and low-loss interconnects of optically encoded signals. I will discuss our work addressing the main challenges of photonic computing: (i) scalability, where we are developing time-multiplexed and free-space optical systems to overcome area bottlenecks; (ii) noise and imperfections, where we have developed new hardware error-correction algorithms for photonics; (iii) the von Neumann bottleneck, which we address with delocalized computing (with additional applications in quantum-secure computation); and (iv) training, where we have demonstrated a forward-only training algorithm for photonic neural networks.
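The abstract does not specify which forward-only scheme is used, so the following is only a minimal sketch of the general idea, assuming a simultaneous-perturbation (SPSA-style) estimator as a stand-in: the gradient is estimated from two forward evaluations of the system, so no backward pass through the optical hardware is required. The toy network, function names, and hyperparameters here are all hypothetical, chosen purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def forward(params, x):
    """Stand-in for a physical photonic forward pass: a tiny 2-layer network.
    On hardware, this would be a measurement of the optical system's output."""
    W1, W2 = params
    h = np.tanh(W1 @ x)  # placeholder nonlinearity
    return W2 @ h

def loss(params, x, y):
    return np.mean((forward(params, x) - y) ** 2)

def spsa_gradient(params, x, y, eps=1e-3):
    """Estimate the gradient from two forward passes only (SPSA):
    g_i ~= [L(theta + eps*d) - L(theta - eps*d)] / (2*eps*d_i), d_i in {-1, +1}.
    Since 1/d_i == d_i for +-1 perturbations, we multiply by d below."""
    deltas = [rng.choice([-1.0, 1.0], size=p.shape) for p in params]
    plus  = [p + eps * d for p, d in zip(params, deltas)]
    minus = [p - eps * d for p, d in zip(params, deltas)]
    g = (loss(plus, x, y) - loss(minus, x, y)) / (2 * eps)
    return [g * d for d in deltas]  # per-parameter gradient estimate

# Toy training loop: learn a random linear map using forward passes only.
params = [0.1 * rng.normal(size=(8, 4)), 0.1 * rng.normal(size=(2, 8))]
target = rng.normal(size=(2, 4))
lr = 0.05
for step in range(2000):
    x = rng.normal(size=4)
    y = target @ x
    grads = spsa_gradient(params, x, y)
    params = [p - lr * g for p, g in zip(params, grads)]
    if step % 500 == 0:
        print(f"step {step:4d}  loss {loss(params, x, y):.4f}")
```

The appeal of such forward-only schemes for analog hardware is that the perturbed forward passes run on the physical system itself, so the gradient estimate automatically accounts for device noise and imperfections that a digital-twin backward pass would miss.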