Recently, there has been a significant increase in research aimed at understanding the theoretical foundations of neural networks defined on infinite-dimensional function spaces. While existing studies have explored various aspects of this topic, our understanding of the approximation and learning capabilities of these networks remains limited. In this talk, I will present our recent work on the generalization analysis of deep functional networks designed to learn nonlinear mappings from a function space to the real numbers (i.e., functionals). By investigating the convergence rates of the approximation and generalization errors, we uncover important insights into the theoretical properties of these networks. This analysis not only deepens our understanding of deep functional networks but also paves the way for their effective application in areas such as operator learning, functional data analysis, and scientific machine learning.
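
For readers unfamiliar with the setting, the following minimal sketch illustrates what it means to learn a functional: an input function is discretized at sensor points, and a deep ReLU network maps that discretization to a scalar. This is a toy illustration only, not the construction analyzed in the talk; the target functional F(f) = \int_0^1 f(t)^2 dt, the sensor grid, the random function model, and the PyTorch architecture are all assumed here for demonstration purposes.

```python
# Toy sketch (illustrative assumptions throughout): learn the functional
# F(f) = \int_0^1 f(t)^2 dt from discretized samples of f with a deep ReLU network.
import numpy as np
import torch
import torch.nn as nn

m = 64                                   # number of sensor points discretizing [0, 1]
grid = np.linspace(0.0, 1.0, m)

def sample_function(rng):
    """Draw a random smooth input f(t) = a*sin(2*pi*t) + b*cos(2*pi*t) + c."""
    a, b, c = rng.normal(size=3)
    return a * np.sin(2 * np.pi * grid) + b * np.cos(2 * np.pi * grid) + c

def target_functional(f_vals):
    """F(f) = \int_0^1 f(t)^2 dt, approximated by the trapezoidal rule on the grid."""
    return np.trapz(f_vals ** 2, grid)

rng = np.random.default_rng(0)
X = np.stack([sample_function(rng) for _ in range(2000)])      # discretized input functions
y = np.array([target_functional(f) for f in X])[:, None]       # scalar targets F(f)

X_t = torch.tensor(X, dtype=torch.float32)
y_t = torch.tensor(y, dtype=torch.float32)

# A deep ReLU network acting on the discretized function: R^m -> R.
net = nn.Sequential(nn.Linear(m, 128), nn.ReLU(),
                    nn.Linear(128, 128), nn.ReLU(),
                    nn.Linear(128, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for epoch in range(500):
    opt.zero_grad()
    loss = loss_fn(net(X_t), y_t)
    loss.backward()
    opt.step()

# Evaluate the learned functional on a fresh input function (generalization check).
f_new = sample_function(rng)
pred = net(torch.tensor(f_new, dtype=torch.float32)).item()
print(f"predicted F(f) = {pred:.4f}, true F(f) = {target_functional(f_new):.4f}")
```

In this picture, the approximation error reflects how well such a network can represent the target functional, while the generalization error reflects how well the network trained on finitely many sampled input functions performs on new ones; the talk studies the convergence rates of both.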