Suppose I have a function $f:\mathbb{R}\rightarrow \mathbb{R}$ defined by the following parametric optimization problem: $$f(p) = \inf_x f_0(x) \quad \text{subject to} \quad G(x,p)\preceq 0,$$ where the objective function $f_0: \mathbb{R}^n \rightarrow \mathbb{R}$ is linear and $G(x,p)\preceq 0$ is a matrix inequality. The matrix inequality is bilinear in the decision variable $x$ and the parameter $p$: $$G(x,p) = G_0 + G_1(x) + p\cdot G_2(x),$$ where $G_0 \in \mathbb{S}^{n}$, $G_1,G_2:\mathbb{R}^n\rightarrow\mathbb{S}^n$ are linear mappings, and $\mathbb{S}^n$ is the space of $n\times n$ symmetric matrices. For any fixed value of $p$, the constraint is a linear matrix inequality in $x$, so the constrained optimization problem is a semidefinite program (SDP). Hence, $f(p)$ can be evaluated via a standard SDP solver.
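To make the setup concrete, here is a minimal sketch of how I evaluate $f(p)$ in CVXPY. The instance data ($G_0$, the matrices $A_i$, $B_i$ defining $G_1$ and $G_2$, and the cost vector $c$) are random placeholders, not my actual problem:

```python
import cvxpy as cp
import numpy as np

n = 3  # toy dimension for illustration
rng = np.random.default_rng(0)

def random_sym(n):
    """Random symmetric n-by-n matrix (placeholder problem data)."""
    M = rng.standard_normal((n, n))
    return (M + M.T) / 2

# G(x, p) = G0 + sum_i x_i A_i + p * sum_i x_i B_i,
# i.e. G1(x) = sum_i x_i A_i and G2(x) = sum_i x_i B_i.
G0 = random_sym(n)
A = [random_sym(n) for _ in range(n)]
B = [random_sym(n) for _ in range(n)]
c = rng.standard_normal(n)  # linear objective f_0(x) = c^T x

def f(p):
    """Evaluate f(p): for fixed p the constraint is an LMI in x, so this is an SDP."""
    x = cp.Variable(n)
    # G is symmetric by construction; for fixed p it is affine in x.
    G = G0 + sum(x[i] * (A[i] + p * B[i]) for i in range(n))
    prob = cp.Problem(cp.Minimize(c @ x), [G << 0])
    prob.solve(solver=cp.SCS)
    # prob.value is +inf if the SDP is infeasible (inf over an empty set);
    # fall back to +inf if the solver fails outright.
    return prob.value if prob.value is not None else np.inf
```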
Given the function $f(p)$, I would like to minimize $f(p)$ over $p$. I know I could form a nonlinear optimization problem that solves for $x$ and $p$ jointly. However, for algorithm-design reasons (preserving physical meaning and keeping the computational cost manageable), I prefer a bilevel approach: zero-order optimization over $p$, with convex optimization used to evaluate $f(p)$ at each query, as sketched below. If the function $f(p)$ is convex in $p$, the use of a zero-order optimization algorithm is justified.
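For reference, this is the kind of outer loop I have in mind. Since $p$ is scalar, a derivative-free one-dimensional method such as SciPy's bounded Brent search only needs evaluations of $f$; convexity (in fact, unimodality) of $f(p)$ on the search interval would guarantee convergence to the global minimizer. The bounds here are arbitrary placeholders:

```python
from scipy.optimize import minimize_scalar

# Derivative-free minimization of f(p); each function evaluation solves one SDP.
# If f is convex (hence unimodal) on [-1, 1], the bounded Brent method finds
# the global minimizer on that interval; otherwise only a local one.
res = minimize_scalar(f, bounds=(-1.0, 1.0), method="bounded")
print(f"p* = {res.x:.4f}, f(p*) = {res.fun:.4f}")
```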
I've done some literature review and found results on the Lipschitz continuity of the optimal value function in parametric nonlinear programming. However, I couldn't find any results specifically on the convexity of the optimal value function in parametric (convex) programming. I would appreciate it if you could share any approaches and references regarding this issue.