
Liquidity management with decreasing returns to scale and secured credit line

Finance and Stochastics

Abstract

This paper examines the dividend and investment policies of a cash constrained firm, assuming a decreasing-returns-to-scale technology and adjustment costs. We extend the literature by allowing the firm to draw on a secured credit line both to hedge against cash-flow shortfalls and to invest/disinvest in a productive asset. We formulate this problem as a two-dimensional singular control problem and use both a viscosity solution approach and a verification technique to get qualitative properties of the value function. We further solve quasi-explicitly the control problem in two special cases.

Notes

  1. The spread may be justified by the cost of equity capital for the bank. Indeed, the full commitment to supply liquidity up to the firm’s credit limit prevents the bank’s shareholders from allocating part of their equity capital to more valuable investment opportunities.

  2. Ly Vath et al. [22] have also studied a reversible investment problem in two alternative technologies for a cash-constrained firm that has no access to external funding.

  3. The extension to the case of variable size will be studied in Sect. 4.

  4. Under a credit line agreement, banks must block part of their funds to provide liquidity. This prevents banks from seizing new opportunities, especially when the demand for funds is high. When the banking system is not competitive, banks charge an increasing credit line spread to offset the opportunity costs.

  5. This assumption is standard in models with cash. It captures in a simple way the agency costs; see [8, 18] for more details.

References

  1. Arfken, G.B., Weber, H.J.: Mathematical Methods for Physicists, 6th edn. Elsevier Academic, Amsterdam (2005)

  2. Asmussen, S., Højgaard, B., Taksar, M.: Optimal risk control and dividend distribution policies. Example of excess-of-loss reinsurance for an insurance corporation. Finance Stoch. 4, 299–324 (2000)

  3. Beneš, V.E., Shepp, L.A., Witsenhausen, H.S.: Some solvable stochastic control problems. Stochastics 4, 38–83 (1980)

  4. Black, F., Cox, J.: Valuing corporate securities: some effects of bond indenture provisions. J. Finance 31, 351–367 (1976)

  5. Bolton, P., Chen, H., Wang, N.: A unified theory of Tobin’s \(q\), corporate investment, financing, and risk management. J. Finance 66, 1545–1578 (2011)

  6. Choulli, T., Taksar, M., Zhou, X.Y.: A diffusion model for optimal dividend distribution for a company with constraints on risk control. SIAM J. Control Optim. 41, 1946–1979 (2003)

  7. Crandall, M.G., Ishii, H., Lions, P.L.: User’s guide to viscosity solutions of second order partial differential equations. Bull. Am. Math. Soc. 27, 1–67 (1992)

  8. Décamps, J.P., Mariotti, T., Rochet, J.C., Villeneuve, S.: Free cash-flows, issuance costs, and stock prices. J. Finance 66, 1501–1544 (2011)

  9. Della Seta, M., Morellec, E., Zucchi, F.: Rollover traps. Working paper EPFL (2016). Available online at http://sfi.epfl.ch/cms/lang/en/pid/135258 or http://sfi.epfl.ch/files/content/sites/sfi/files/users/185422/public/rollover_cash_CURRENT.pdf

  10. Diamond, D.: Financial intermediation and delegated monitoring. Rev. Econ. Stud. 51, 393–414 (1984)

  11. Federico, S., Pham, H.: Characterization of the optimal boundaries in reversible investment problems. SIAM J. Control Optim. 52, 2180–2223 (2014)

  12. Forsyth, P., Labahn, G.: Numerical methods for controlled Hamilton–Jacobi–Bellman PDEs in finance. J. Comput. Finance 11, 1–44 (2007)

  13. Haussmann, U.G., Suo, W.: Singular optimal stochastic controls I: existence. SIAM J. Control Optim. 33, 916–936 (1995)

  14. Haussmann, U.G., Suo, W.: Singular optimal stochastic controls II: dynamic programming. SIAM J. Control Optim. 33, 937–959 (1995)

  15. Højgaard, B., Taksar, M.: Controlling risk exposure and dividends pay-out schemes: insurance company example. Math. Finance 9, 153–182 (1999)

  16. Holmström, B., Tirole, J.: Private and public supply of liquidity. J. Polit. Econ. 106, 1–40 (1998)

  17. Hugonnier, J., Morellec, E.: Bank capital, liquid reserves and insolvency risk. Working paper EPFL (2016). Available online at http://sfi.epfl.ch/cms/lang/en/pid/135258 or http://sfi.epfl.ch/files/content/sites/sfi/files/users/185422/public/banking.pdf

  18. Hugonnier, J., Malamud, S., Morellec, E.: Capital supply uncertainty, cash holdings and investment. Rev. Financ. Stud. 28, 391–445 (2015)

  19. Jeanblanc-Picqué, M., Shiryaev, A.N.: Optimization of the flow of dividends. Russ. Math. Surv. 50, 257–277 (1995)

  20. Kashyap, A., Rajan, R., Stein, J.: Banks as liquidity providers: an explanation for the co-existence of lending and deposit-taking. J. Finance 57, 33–73 (2002)

  21. Leland, H.E.: Corporate debt value, bond covenants, and optimal capital structure. J. Finance 49, 1213–1252 (1994)

  22. Ly Vath, V., Pham, H., Villeneuve, S.: A mixed singular/switching control problem for a dividend policy with reversible technology investment. Ann. Appl. Probab. 18, 1164–1200 (2008)

  23. Manso, G., Strulovici, B., Tchistyi, A.: Performance-sensitive debt. Rev. Financ. Stud. 23, 1819–1854 (2010)

  24. Miller, M.H., Modigliani, F.: Dividend policy, growth and the valuation of shares. J. Bus. 34, 411–433 (1961)

  25. Paulsen, J.: Optimal dividend payouts for diffusions with solvency constraints. Finance Stoch. 7, 457–474 (2003)

  26. Pham, H.: Continuous-Time Stochastic Control and Optimization with Financial Applications. Springer, Berlin (2009)

  27. Radner, R., Shepp, L.: Risk versus profit potential: a model for corporate strategy. J. Econ. Dyn. Control 20, 1373–1393 (1996)

  28. Revuz, D., Yor, M.: Continuous Martingales and Brownian Motion, 3rd edn. Springer, Berlin (1999)

  29. Sufi, A.: Bank lines of credit in corporate finance. Rev. Financ. Stud. 22, 1057–1088 (2009)

Acknowledgements

The authors gratefully acknowledge the financial support of the research initiative IDEI-SCOR “Risk Market and Creation Value” under the aegis of the risk foundation and the chair Finance and Sustainable Development (IEF sponsored by EDF and CA).

Author information

Corresponding author

Correspondence to Stéphane Villeneuve.

Appendix

6.1 Proof of Theorem 5.4

1. Supersolution property. Let \((\bar{x},\bar{k}) \in S\) and \(\varphi\in C^{2}(\mathbb{R}_{+}^{2})\) be such that \((\bar{x},\bar{k})\) is a minimum of \(V^{*}-\varphi\) in a neighbourhood \(B_{\varepsilon}(\bar {x},\bar{k})\) of \((\bar{x},\bar{k})\) with \(\varepsilon\) small enough to ensure \(B_{\varepsilon}\subset S\) and \(V^{*}(\bar{x},\bar{k}) = \varphi(\bar{x},\bar{k})\).

First, consider the admissible control \(\hat{\pi} = (\hat{Z}, \hat{I})\), where the shareholders decide to never invest or disinvest, while the dividend policy is defined by \(\hat{Z}_{t} = \eta\) for \(t \geq0\), with \(0 \leq\eta \leq \varepsilon\). Define the exit time \(\tau_{\varepsilon}=\inf\{t \geq0: (X_{t}^{\bar{x}},K_{t}^{\bar{k}}) \notin\overline{B}_{\varepsilon}(\bar {x},\bar{k}) \}\). We notice that \(\tau_{\varepsilon}< \tau_{0}\) for \(\varepsilon\) small enough. From the dynamic programming principle, we have

$$\begin{aligned} \varphi(\bar{x},\bar{k}) = V^{*}(\bar{x},\bar{k}) \geq& \mathbb{E}\bigg[\int_{0}^{\tau_{\varepsilon}\wedge h} e^{-r t}d\hat {Z}_{t} + e^{-r(\tau_{\varepsilon}\wedge h)} V^{*}\big(X_{\tau_{\varepsilon}\wedge h}^{\bar{x}},K_{\tau_{\varepsilon}\wedge h}^{\bar{k}}\big) \bigg] \\ \geq& \mathbb{E}\bigg[\int_{0}^{\tau_{\varepsilon}\wedge h} e^{-r t}d\hat{Z}_{t} + e^{-r(\tau_{\varepsilon}\wedge h)} \varphi\big(X_{\tau_{\varepsilon}\wedge h}^{\bar{x}},K_{\tau_{\varepsilon}\wedge h}^{\bar{k}}\big) \bigg]. \end{aligned}$$
(6.1)

Applying Itô’s formula to the process \((e^{-r t}\varphi(X_{t}^{\bar{x}},K_{t}^{\bar{k}}))\) between 0 and \(\tau_{\varepsilon}\wedge h\) and taking expectations, we obtain

$$\begin{aligned} \mathbb{E}\big[e^{-r (\tau_{\varepsilon}\wedge h)} \varphi(X_{\tau _{\varepsilon}\wedge h}^{\bar{x}},K_{\tau_{\varepsilon}\wedge h}^{\bar{k}}) \big] =& \varphi(\bar {x},\bar{k}) + \mathbb{E}\bigg[\int_{0}^{\tau_{\varepsilon}\wedge h} e^{-r t} \mathcal{L} \varphi (X_{t}^{\bar{x}},K_{t}^{\bar{k}}) dt \bigg] \\ &{} + \mathbb{E}\bigg[ \sum_{0< t \leq\tau_{\varepsilon}\wedge h} e^{-r t} \big(\varphi(X_{t}^{\bar{x}},K_{t}^{\bar{k}}) - \varphi(X_{t-}^{\bar{x}},K_{t}^{\bar {k}})\big) \bigg]. \end{aligned}$$
(6.2)

Combining (6.1) and (6.2), we have

$$\begin{aligned} &\mathbb{E}\bigg[\int_{0}^{\tau_{\varepsilon}\wedge h} e^{-r t}(- \mathcal{L}) \varphi (X_{t}^{\bar{x}},K_{t}^{\bar{k}}) dt \bigg] - \mathbb{E}\bigg[\int_{0}^{\tau_{\varepsilon}\wedge h} e^{-r t}d\hat {Z}_{t} \bigg] \\ &{}- \mathbb{E}\bigg[ \sum_{0< t \leq\tau_{\varepsilon}\wedge h} e^{-r t} \big(\varphi (X_{t}^{\bar{x}},K_{t}^{\bar{k}}) - \varphi(X_{t-}^{\bar{x}},K_{t}^{\bar {k}})\big) \bigg] \geq0. \end{aligned}$$
(6.3)
⋆ First take \(\eta = 0\). We then observe that \(X\) is continuous on \([\![0, \tau_{\varepsilon}\wedge h]\!]\) and only the first term of (6.3) is nonzero. By dividing the above inequality by \(h\) and letting \(h \rightarrow0\), we conclude that \(- \mathcal{L} \varphi(\bar{x},\bar{k}) \geq0\).

⋆ Now take \(\eta> 0\) in (6.3). We see that \(\hat{Z}\) jumps only at \(t = 0\) with size \(\eta\), so that

$$\mathbb{E}\bigg[\int_{0}^{\tau_{\varepsilon}\wedge h} e^{-r t}( - \mathcal{L} \varphi )(X_{t}^{\bar{x}},K_{t}^{\bar{k}}) dt \bigg] - \eta-\big(\varphi(\bar{x} - \eta,\bar{k}) - \varphi(\bar {x},\bar {k})\big) \geq0. $$

By sending \(h \rightarrow0\) and then dividing by \(\eta\) and letting \(\eta\rightarrow0\), we obtain

$$\frac{\partial\varphi}{\partial x}(\bar{x},\bar{k}) - 1 \geq0. $$

Second, consider the admissible control \(\bar{\pi} = (\bar{Z}, \bar{I})\), where the shareholders decide to never pay out dividends, while the investment/disinvestment policy is defined by \(\bar{I}_{t}=\eta\in\mathbb{R}\) for \(t \geq 0\), with \(0<|\eta| \leq \varepsilon\). Define again the exit time \(\tau_{\varepsilon}=\inf\{t \geq0: (X_{t}^{\bar{x}},K_{t}^{\bar{k}}) \notin\overline{B}_{\varepsilon}(\bar{x},\bar{k}) \}\). Proceeding as in the first part and observing that \(\bar{I}\) jumps only at \(t=0\), we get

$$\mathbb{E}\bigg[\int_{0}^{\tau_{\varepsilon}\wedge h} e^{-r t}( - \mathcal{L} \varphi )(X_{t}^{\bar{x}},K_{t}^{\bar{k}}) dt \bigg] - \big(\varphi(\bar{x} - \gamma|\eta|,\bar{k}+\eta) - \varphi (\bar {x},\bar{k})\big) \geq0. $$

Assuming first \(\eta>0\), by sending \(h \rightarrow0\), and then dividing by \(\eta\) and letting \(\eta\rightarrow0\), we obtain

$$\gamma\frac{\partial\varphi}{\partial x}(\bar{x},\bar{k})-\frac {\partial\varphi}{\partial k}(\bar{x},\bar{k}) \ge0. $$

When \(\eta<0\), we get in the same manner

$$\gamma\frac{\partial\varphi}{\partial x}(\bar{x},\bar{k})+\frac {\partial\varphi}{\partial k}(\bar{x},\bar{k}) \ge0. $$

This proves the required supersolution property.
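Collecting the four inequalities obtained above, the supersolution property established at \((\bar{x},\bar{k})\) can be summarised as

$$\min\bigg\{ -\mathcal{L}\varphi(\bar{x},\bar{k}),\ \frac{\partial\varphi}{\partial x}(\bar{x},\bar{k}) - 1,\ \gamma\frac{\partial\varphi}{\partial x}(\bar{x},\bar{k})-\frac{\partial\varphi}{\partial k}(\bar{x},\bar{k}),\ \gamma\frac{\partial\varphi}{\partial x}(\bar{x},\bar{k})+\frac{\partial\varphi}{\partial k}(\bar{x},\bar{k})\bigg\} \geq0, $$

which is the supersolution half of the variational inequality (5.1) tested against \(\varphi\).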

2. Subsolution property. We prove the subsolution property by contradiction. Suppose that the claim is not true. Then there exist \((\bar{x},\bar{k}) \in S\) and a neighbourhood \(B_{\varepsilon}(\bar{x},\bar{k})\) of \((\bar{x},\bar{k})\), included in \(S\) for \(\varepsilon\) small enough, a \(C^{2}\) function \(\varphi\) with \((\varphi- V^{*})(\bar{x},\bar{k})= 0\) and \(\varphi\geq V^{*}\) on \(B_{\varepsilon}(\bar{x},\bar{k})\), and \(\eta> 0\) such that we have, for all \((x,k) \in B_{\varepsilon}(\bar{x},\bar{k})\),

$$\begin{aligned} - \mathcal{L}\varphi(x,k) > \eta, \end{aligned}$$
(6.4)
$$\begin{aligned} \frac{\partial\varphi}{\partial x}(x,k) - 1 > \eta, \end{aligned}$$
(6.5)
$$\begin{aligned} \bigg(\gamma\frac{\partial\varphi}{\partial x}-\frac{\partial \varphi }{\partial k}\bigg)(x,k) > \eta, \end{aligned}$$
(6.6)
$$\begin{aligned} \bigg(\gamma\frac{\partial\varphi}{\partial x}+\frac{\partial \varphi }{\partial k}\bigg)(x,k) > \eta. \end{aligned}$$
(6.7)

For any admissible control \(\pi\), the exit time \(\tau_{\varepsilon}= \inf\{ t \geq0: (X_{t}^{\bar {x}},K_{t}^{\bar{k}}) \notin B_{\varepsilon}(\bar{x},\bar{k}) \}\) satisfies \(\tau_{\varepsilon}< \tau_{0}\). Applying Itô’s formula to the process \((e^{-r t}\varphi(X_{t}^{\bar{x}},K_{t}^{\bar{k}}))\) between 0 and \(\tau_{\varepsilon}{-}\), we have

$$\begin{aligned} \mathbb{E}\big[e^{-r \tau_{\varepsilon}-}\varphi(X_{\tau _{\varepsilon}-},K_{\tau _{\varepsilon}-})\big] =& \varphi(\bar{x},\bar{k}) + \mathbb{E}\bigg[\int _{0}^{\tau _{\varepsilon}-}e^{-ru}\mathcal{L}\varphi\, du\bigg] \\ &{}+ \mathbb{E}\bigg[\int_{0}^{\tau_{\varepsilon}-}e^{-r u}\bigg(-\gamma \frac {\partial\varphi}{\partial x} + \frac{\partial\varphi}{\partial k}\bigg)dI_{u}^{c,+}\bigg] \\ &{}+ \mathbb{E}\bigg[\int_{0}^{\tau_{\varepsilon}-}e^{-r u}\bigg(-\gamma \frac {\partial\varphi}{\partial x} - \frac{\partial\varphi}{\partial k}\bigg)dI_{u}^{c,-}\bigg] \\ &{}- \mathbb{E}\bigg[\int_{0}^{\tau_{\varepsilon}-}e^{-r u}\frac {\partial \varphi }{\partial x}dZ^{c}_{u}\bigg] \\ &{}+ \mathbb{E}\bigg[\sum_{0 < s < \tau_{\varepsilon}}e^{-r s}\big(\varphi (X_{s},K_{s}) - \varphi(X_{s-},K_{s-})\big)\bigg]. \end{aligned}$$

Using (6.4)–(6.7), we obtain

$$\begin{aligned} V^{*}(\bar{x},\bar{k}) = \varphi(\bar{x},\bar{k}) \geq& \mathbb{E}\big[ e^{-r \tau_{\varepsilon}-}\varphi\big(X_{\tau_{\varepsilon}-},K_{\tau _{\varepsilon}-}\big)\big] + \eta\mathbb{E} \bigg[ \int_{0}^{\tau_{\varepsilon}-} e^{-r u} du \bigg]\\ &{}+ \eta\mathbb{E}\bigg[\int_{0}^{\tau_{\varepsilon}-}e^{-r u}dI_{u}^{c,+}\bigg] + \eta\mathbb{E}\bigg[\int_{0}^{\tau_{\varepsilon}-}e^{-r u}dI_{u}^{c,-}\bigg]\\ &{}+ (1+\eta)\mathbb{E}\bigg[\int_{0}^{\tau_{\varepsilon}-}e^{-r u}dZ^{c}_{u}\bigg]\\ &{}- \mathbb{E}\bigg[\sum_{0 < s < \tau_{\varepsilon}}e^{-r s}\big(\varphi (X_{s},K_{s}) - \varphi(X_{s-},K_{s-})\big)\bigg]. \end{aligned}$$

Note that \(\Delta X_{s} = -\Delta Z_{s} - \gamma( \Delta I_{s}^{+} + \Delta I_{s}^{-} )\), \(\Delta K_{s} = \Delta I_{s}^{+} - \Delta I_{s}^{-}\) and by the mean value theorem, there is some \(\theta\in(0,1)\) such that

$$\begin{aligned} &\varphi(X_{s},K_{s}) - \varphi(X_{s-},K_{s-}) \\ &\quad= \frac{\partial\varphi}{\partial x}(X_{s-} + \theta\Delta X_{s}, K_{s-} + \theta\Delta K_{s}) \Delta X_{s} \\ & \qquad{} +\frac{\partial\varphi}{\partial k}(X_{s-} + \theta \Delta X_{s}, K_{s-} + \theta\Delta K_{s})\Delta K_{s}\\ &\quad= \frac{\partial\varphi}{\partial x}(X_{s-} + \theta\Delta X_{s}, K_{s-} + \theta\Delta K_{s})\big(-\Delta Z_{s} - \gamma( \Delta I_{s}^{+} + \Delta I_{s}^{-} ) \big) \\ &\qquad{}+ \frac{\partial\varphi}{\partial k}(X_{s-} + \theta \Delta X_{s}, K_{s-} + \theta\Delta K_{s})(\Delta I_{s}^{+} - \Delta I_{s}^{-})\\ &\quad= - \frac{\partial\varphi}{\partial x}(X_{s-} + \theta\Delta X_{s}, K_{s-} + \theta\Delta K_{s})\Delta Z_{s}\\ &\qquad{}+ \bigg(- \gamma\frac{\partial\varphi}{\partial x}(X_{s-} + \theta\Delta X_{s}, K_{s-} + \theta\Delta K_{s}) \\ & \qquad\qquad+ \frac{\partial\varphi}{\partial k}(X_{s-} + \theta\Delta X_{s}, K_{s-} + \theta\Delta K_{s})\bigg) \Delta I_{s}^{+}\\ &\qquad{}+ \bigg(- \gamma\frac{\partial\varphi}{\partial x}(X_{s-} + \theta\Delta X_{s}, K_{s-} + \theta\Delta K_{s}) \\ &\qquad\qquad- \frac{\partial\varphi}{\partial k}(X_{s-} + \theta\Delta X_{s}, K_{s-} + \theta\Delta K_{s})\bigg) \Delta I_{s}^{-}. \end{aligned}$$

Because \((X_{s-}+\theta\Delta X_{s}, K_{s-}+\theta\Delta K_{s}) \in B_{\varepsilon}(\bar {x},\bar{k})\), we use (6.5)–(6.7) again to get

$$- \big(\varphi(X_{s},K_{s}) - \varphi(X_{s-},K_{s-})\big) \geq(1+\eta) \Delta Z_{s} + \eta\Delta I_{s}^{+} + \eta\Delta I_{s}^{-}. $$

Therefore,

$$\begin{aligned} V^{*}(\bar{x},\bar{k}) \geq& \mathbb{E}\big[e^{-r \tau_{\varepsilon}-}\varphi (X_{\tau_{\varepsilon}-},K_{\tau_{\varepsilon}-})\big] + \mathbb {E}\bigg[\int _{0}^{\tau_{\varepsilon}-}e^{-r u} dZ_{u}\bigg]\\ &{}+ \eta\bigg( \mathbb{E} \bigg[ \int_{0}^{\tau_{\varepsilon}-} e^{-r u} du \bigg] + \mathbb{E}\bigg[\int_{0}^{\tau_{\varepsilon}-}e^{-ru}dI_{u}^{+}\bigg] + \mathbb {E}\bigg[\int_{0}^{\tau_{\varepsilon}-}e^{-r u}dI_{u}^{-}\bigg] \\ &{} + \mathbb{E}\bigg[\int_{0}^{\tau_{\varepsilon}-}e^{-r u}dZ_{u}\bigg] \bigg). \end{aligned}$$

Notice that while \((X_{\tau_{\varepsilon}-}, K_{\tau_{\varepsilon}-}) \in B_{\varepsilon}(\bar {x},\bar{k})\), \((X_{\tau_{\varepsilon}},K_{\tau_{\varepsilon}})\) is either on the boundary \(\partial B_{\varepsilon}(\bar{x},\bar{k})\) or outside of \(\bar{B}_{\varepsilon}(\bar{x},\bar{k})\). However, there is some random variable \(\alpha\) valued in \([0,1]\) such that

$$\begin{aligned} (X^{(\alpha)},K^{(\alpha)}) =& (X_{\tau_{\varepsilon}-},K_{\tau _{\varepsilon}-}) + \alpha(\Delta X_{\tau_{\varepsilon}}, \Delta K_{\tau_{\varepsilon}})\\ =& (X_{\tau_{\varepsilon}-},K_{\tau_{\varepsilon}-}) + \alpha(- \Delta Z_{\tau _{\varepsilon}} - \gamma\Delta I^{+}_{\tau_{\varepsilon}} - \gamma\Delta I^{-}_{\tau _{\varepsilon}}, \Delta I^{+}_{\tau_{\varepsilon}}- \Delta I^{-}_{\tau_{\varepsilon}}) \end{aligned}$$

is in \(\partial B_{\varepsilon}(\bar{x},\bar{k})\). Proceeding analogously as above, we show that

$$\varphi(X^{(\alpha)},K^{(\alpha)}) - \varphi(X_{\tau_{\varepsilon}-},K_{\tau_{\varepsilon}-}) \leq- \alpha\big((1+\eta) \Delta Z_{\tau_{\varepsilon}} + \eta \Delta I_{\tau_{\varepsilon}}^{+} + \eta\Delta I_{\tau_{\varepsilon}}^{-}\big). $$

Observe that

$$(X^{(\alpha)},K^{(\alpha)}) = (X_{\tau_{\varepsilon}},K_{\tau _{\varepsilon}}) + (1-\alpha )(\Delta Z_{\tau_{\varepsilon}} + \gamma\Delta I^{+}_{\tau _{\varepsilon}} + \gamma \Delta I^{-}_{\tau_{\varepsilon}}, -\Delta I^{+}_{\tau_{\varepsilon}} + \Delta I^{-}_{\tau_{\varepsilon}}). $$

Starting from \((X^{(\alpha)},K^{(\alpha)})\), the strategy that consists of investing \((1-\alpha)\Delta I_{\tau_{\varepsilon}}^{+}\) or disinvesting \((1-\alpha)\Delta I_{\tau_{\varepsilon}}^{-}\), depending on the sign of \(K^{(\alpha )}-K_{\tau_{\varepsilon}}\), and paying out \((1-\alpha)\Delta Z_{\tau _{\varepsilon}}\) as dividends leads to \((X_{\tau_{\varepsilon}},K_{\tau_{\varepsilon}})\), and therefore

$$V^{*}(X^{(\alpha)},K^{(\alpha)}) - V^{*}(X_{\tau_{\varepsilon}},K_{\tau _{\varepsilon}}) \geq (1- \alpha) \Delta Z_{\tau_{\varepsilon}}. $$

Using \(\varphi(X^{(\alpha)},K^{(\alpha)}) \geq V^{*}(X^{(\alpha )},K^{(\alpha)})\), we deduce

$$\varphi(X_{\tau_{\varepsilon}-},K_{\tau_{\varepsilon}-}) - V^{*}(X_{\tau_{\varepsilon}}, K_{\tau_{\varepsilon}}) \geq(1 + \alpha\eta) \Delta Z_{\tau _{\varepsilon}} + \alpha \eta(\Delta I^{+}_{\tau_{\varepsilon}} + \Delta I^{-}_{\tau _{\varepsilon}}). $$

Hence,

$$\begin{aligned} V^{*}(\bar{x},\bar{k}) \geq& \eta\bigg( \mathbb{E}\bigg[ \int _{0}^{\tau _{\varepsilon}-} e^{-r u} du \bigg] + \mathbb{E}\bigg[\int _{0}^{\tau_{\varepsilon }-}e^{-r u}dI_{u}^{+}\bigg] +\mathbb{E}\bigg[\int_{0}^{\tau _{\varepsilon}-}e^{-r u}dI_{u}^{-}\bigg] \\ & \phantom{\eta\Big(}+ \mathbb{E}\bigg[\int_{0}^{\tau _{\varepsilon}-}e^{-r u}dZ_{u}\bigg] + \mathbb{E}\big[e^{-r \tau_{\varepsilon}}\alpha (\Delta Z_{\tau _{\varepsilon}} + \gamma\Delta I_{\tau_{\varepsilon}}^{+} + \gamma \Delta I_{\tau _{\varepsilon }}^{-})\big]\bigg) \\ &{}+ \mathbb{E}\big[e^{-r \tau_{\varepsilon}}V^{*}(X_{\tau _{\varepsilon}},K_{\tau _{\varepsilon }})\big] + \mathbb{E}\bigg[\int_{0}^{\tau_{\varepsilon}}e^{-r u}dZ_{u}\bigg]. \end{aligned}$$
(6.8)

We now claim there is \(c_{0} > 0\) such that for any admissible strategy,

$$\begin{aligned} c_{0} \leq&\mathbb{E} \bigg[ \int_{0}^{\tau_{\varepsilon}-} e^{-r u} du + \int _{0}^{\tau_{\varepsilon}-}e^{-r u}dI_{u}^{+} + \int_{0}^{\tau _{\varepsilon}-}e^{-r u}dI_{u}^{-} + \int_{0}^{\tau_{\varepsilon}-}e^{-r u}dZ_{u}\bigg] \\ &{}+ \mathbb{E}\big[e^{-r \tau_{\varepsilon}}\alpha(\Delta Z_{\tau _{\varepsilon }} + \gamma\Delta I_{\tau_{\varepsilon}}^{+} + \gamma\Delta I_{\tau _{\varepsilon }}^{-})\big]. \end{aligned}$$
(6.9)

Let us consider the \(C^{2}\) function \(\phi(x,k) = c_{0}(1-\frac{(x-\bar {x})^{2}}{\varepsilon^{2}})\) with

$$0 < c_{0} \leq\min\bigg\{ \frac{\varepsilon}{2}, \frac{\varepsilon }{2 \gamma}, \frac {1}{r}, \frac{\varepsilon^{2}}{\sigma^{2}\bar{\beta}^{2}}, \frac {\varepsilon }{2d_{\max}}\bigg\} , $$

where

$$d_{\max} = \sup\bigg\{ \frac{|\beta(k)\mu- \alpha ((k-x)^{+})|}{\varepsilon}: (x,k) \in B_{\varepsilon}(\bar{x},\bar{k})\bigg\} >0. $$

This function satisfies

$$\left\{ \begin{aligned} &\phi(\bar{x},\bar{k}) = c_{0},\\ &\phi= 0 \quad \text{for } (x,k) \in\partial B_{\varepsilon},\\ &\min\bigg\{ 1 - \mathcal{L}\phi, 1 -\gamma\frac{\partial\phi }{\partial x} + \frac{\partial\phi}{\partial k}, 1 -\gamma\frac {\partial\phi}{\partial x} - \frac{\partial\phi}{\partial k}, 1 - \frac{\partial\phi}{\partial x}\bigg\} \geq0\quad \text{for } (x,k) \in B_{\varepsilon}. \end{aligned} \right. $$

Applying Itô’s formula, we have

$$\begin{aligned} 0 < c_{0} =& \phi(\bar{x},\bar{k}) \\ \leq&\mathbb{E}\big[e^{-r \tau_{\varepsilon}-}\phi(X_{\tau _{\varepsilon }-},K_{\tau _{\varepsilon}-})\big] + \mathbb{E}\bigg[\int_{0}^{\tau _{\varepsilon}-}e^{-r u}du\bigg] \\ &{}+ \mathbb{E}\bigg[\int_{0}^{\tau_{\varepsilon}-}e^{-r u}dI^{+}_{u}\bigg]+ \mathbb {E}\bigg[\int_{0}^{\tau_{\varepsilon}-}e^{-r u}dI^{-}_{u}\bigg]+ \mathbb {E}\bigg[\int _{0}^{\tau_{\varepsilon}-}e^{-r u}dZ_{u}\bigg]. \end{aligned}$$
(6.10)

Noting that \(\frac{\partial\phi}{\partial x} \leq1\) and \(\frac {\partial\phi}{\partial k}= 0\), we have

$$\phi(X_{\tau_{\varepsilon}-},K_{\tau_{\varepsilon}-}) - \phi (X^{(\alpha )},K^{(\alpha )}) \leq X_{\tau_{\varepsilon}-} - X^{(\alpha)} = \alpha(\Delta Z_{\tau _{\varepsilon }} + \gamma\Delta I^{+}_{\tau_{\varepsilon}} + \gamma\Delta I^{-}_{\tau_{\varepsilon}}). $$

Plugging this into (6.10) with \(\phi(X^{(\alpha )},K^{(\alpha )})=0\), we obtain

$$\begin{aligned} c_{0} \leq&\mathbb{E} \bigg[ \int_{0}^{\tau_{\varepsilon}-} e^{-r u} du + \int _{0}^{\tau_{\varepsilon}-}e^{-r u}dI_{u}^{+} + \int_{0}^{\tau _{\varepsilon}-}e^{-r u}dI_{u}^{-} + \int_{0}^{\tau_{\varepsilon}-}e^{-r u}dZ_{u}\bigg]\\ &{}+ \mathbb{E}\big[e^{-r \tau_{\varepsilon}}\alpha(\Delta Z_{\tau _{\varepsilon }} + \gamma\Delta I_{\tau_{\varepsilon}}^{+} + \gamma\Delta I_{\tau _{\varepsilon}}^{-}) \big]. \end{aligned}$$

This proves the claim (6.9). Finally, by taking the supremum over \(\pi\) and using the dynamic programming principle, (6.8) implies \(V^{*}(\bar{x},\bar{k}) \geq V^{*}(\bar{x},\bar{k}) + \eta c_{0}\), which is a contradiction.

3. Uniqueness. Suppose \(u\) is a continuous subsolution and \(w\) a continuous supersolution of (5.1) on \(S\) satisfying the boundary conditions

$$u(x,0)\le w(x,0), \qquad u(\gamma k,k)\le w(\gamma k,k)\quad\mbox{for } (x,k) \in S, $$

and the linear growth condition

$$|u(x,k)|+|w(x,k)|\le C_{1} +C_{2}(x+k) \quad\text{for } (x,k) \in S, $$

for some positive constants \(C_{1}\) and \(C_{2}\). We show by adapting some standard arguments that \(u \le w\).

Step 1. We first construct a strict supersolution of (5.1) with a perturbation of \(w\). Set

$$h(x,k) = A + B x + Ck + Dxk + Ex^{2} + k^{2} $$

with

$$ A = \frac{1+\mu\bar{\beta} B + \sigma^{2} \bar{\beta}^{2}E }{r} + C_{1} $$
(6.11)

and

$$B = 2 + \frac{1+C}{\gamma} + \frac{2\mu\bar{\beta}E}{r},\qquad C = \frac{\mu\bar{\beta}D}{r},\qquad D = 2\gamma E,\qquad E = \frac{1}{\gamma^{2}}, $$

and define for \(\lambda\in[0,1]\) on \(S\) the continuous function

$$w^{\lambda}=(1-\lambda)w+\lambda h. $$

Because

$$\begin{aligned} \frac{\partial h}{\partial x} - 1 &= B+Dk+2Ex - 1 \geq1, \\ \gamma\frac{\partial h}{\partial x} - \frac{\partial h}{\partial k} &= \gamma(B+Dk+2Ex)-(C+Dx+2k) \geq1, \\ \gamma\frac{\partial h}{\partial x} + \frac{\partial h}{\partial k} &= \gamma(B+Dk+2Ex)+ (C+Dx+2k)\geq1 \end{aligned}$$

and

$$\begin{aligned} -\mathcal{L}h = &-\Big(\beta(k)\mu- \alpha\big((k-x)^{+}\big)\Big)(B+Dk+2Ex) \\ &{}- \frac{\sigma^{2}\beta(k)^{2}}{2}2E + r (A + Bx + Ck + Dxk + Ex^{2} + k^{2})\\ \geq& \big(r A - \beta(k)\mu B - \sigma^{2} \beta(k)^{2}E\big) + \big(r B - 2 \mu\beta(k)E\big)x + \big(rC- \mu\beta(k)D\big)k\\ \geq&1, \end{aligned}$$

we have that

$$\min\bigg\{ -\mathcal{L} h, \frac{\partial h}{\partial x} - 1, \gamma \frac{\partial h}{\partial x} - \frac{\partial h}{\partial k}, \gamma \frac{\partial h}{\partial x} + \frac{\partial h}{\partial k}\bigg\} \geq1 $$

which implies that \(w^{\lambda}\) is a strict supersolution of (5.1). To prove this point, one only needs to take \((\bar{x},\bar{k})\) and \(\varphi\in C^{2}\) such that \((\bar{x},\bar{k})\) is a minimum of \(w^{\lambda}- \varphi\) and notice that \((\bar{x},\bar{k})\) is also a minimum of \(w - \varphi_{2}\) with \(\varphi_{2} = \frac{\varphi- \lambda h}{1-\lambda}\), which allows us to use that \(w\) is a viscosity supersolution of (5.1).
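As a quick numerical sanity check of Step 1, the following sketch evaluates the four quantities appearing in the last display on a grid. The parameter values and the functions \(\beta\) and \(\alpha\) used below are purely illustrative assumptions (they are not taken from the paper); the only structural features used are that \(\beta\) is concave, increasing and bounded by \(\bar{\beta}\), and that \(\alpha\) is convex and nonnegative.

```python
# Illustrative sanity check of Step 1: with the constants A, B, C, D, E built
# from the stated formulas, the perturbation h should satisfy
#   min{-Lh, h_x - 1, gamma*h_x - h_k, gamma*h_x + h_k} >= 1  on a grid.
# All parameter values and the functions beta, alpha below are hypothetical.
import numpy as np

mu, sigma, r, gamma, C1 = 0.2, 0.3, 0.05, 1.2, 1.0
beta_bar = 1.0
beta = lambda k: beta_bar * (1.0 - np.exp(-k / beta_bar))   # concave, bounded by beta_bar
alpha = lambda y: 0.3 * y + 0.1 * y**2                      # convex credit-line spread, alpha(0)=0

# Constants of the strict supersolution h(x,k) = A + Bx + Ck + Dxk + Ex^2 + k^2.
E = 1.0 / gamma**2
D = 2.0 * gamma * E
C = mu * beta_bar * D / r
B = 2.0 + (1.0 + C) / gamma + 2.0 * mu * beta_bar * E / r
A = (1.0 + mu * beta_bar * B + sigma**2 * beta_bar**2 * E) / r + C1

def h_x(x, k): return B + D * k + 2.0 * E * x
def h_k(x, k): return C + D * x + 2.0 * k
def minus_Lh(x, k):
    drift = beta(k) * mu - alpha(np.maximum(k - x, 0.0))
    h = A + B * x + C * k + D * x * k + E * x**2 + k**2
    return -(drift * h_x(x, k) + sigma**2 * beta(k)**2 * E) + r * h

xs = np.linspace(0.0, 5.0, 101)
ks = np.linspace(0.0, 5.0, 101)
X, K = np.meshgrid(xs, ks)
quantities = np.minimum.reduce([
    minus_Lh(X, K),
    h_x(X, K) - 1.0,
    gamma * h_x(X, K) - h_k(X, K),
    gamma * h_x(X, K) + h_k(X, K),
])
print("min over the grid:", quantities.min())
```

With the hypothetical values above, the reported minimum should be at least 1, in line with the display preceding this sketch.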

Step 2. In order to prove the strong comparison result, it suffices to show that for every \(\lambda\in[0,1]\),

$$ \sup_{S} (u-w^{\lambda}) \le0. $$
(6.12)

Assume by way of contradiction that there exists \(\lambda\) such that

$$ \sup_{S} (u-w^{\lambda}) > 0. $$
(6.13)

Because \(u\) and \(w\) have linear growth, we have

$$\displaystyle{\lim_{|(x,k)|\to\infty}}(u-w^{\lambda})(x,k)= -\infty. $$

Using the boundary conditions

$$\begin{aligned} u(x,0)-w^{\lambda}(x,0) = & (1- \lambda) \big(u(x,0) -w(x,0)\big) \\ &{}+ \lambda\big( u(x,0) - (A + Bx + E x^{2})\big), \\ \le& \lambda\big(u(x,0) - (A + Bx + E x^{2})\big),\\ u(\gamma k,k)-w^{\lambda}(\gamma k,k) \le& \lambda\Big(u(\gamma k,k) \\ &\phantom{\lambda\big(}{} - \big( A + (B \gamma+ C) k + ( D \gamma + E \gamma^{2} +1 )k^{2}\big)\Big) \end{aligned}$$

and the linear growth condition, it is always possible to find \(C_{1}\) in (6.11) such that both expressions above are negative and the maximum in (6.13) is reached inside the domain \(S\). By continuity of the functions \(u\) and \(w^{\lambda}\), there exists a pair \((x_{0},k_{0})\) with \(x_{0} \ge\gamma k_{0}\) such that

$$M=\sup_{S} (u-w^{\lambda})=(u-w^{\lambda})(x_{0},k_{0}). $$

For \(\epsilon> 0\), consider the functions

$$\begin{aligned} \varPhi_{\epsilon}(x,y,k,\ell) &= u(x,k) - w^{\lambda}(y,\ell) - \phi _{\epsilon}(x,y,k,\ell),\\ \phi_{\epsilon}(x,y,k,\ell) &= \frac{1}{2\epsilon}\big(|x-y|^{2}+|k-\ell |^{2}\big) + \frac{1}{4}\big(|x-x_{0}|^{4}+|k-k_{0}|^{4}\big). \end{aligned}$$

By standard arguments in the comparison principle of the viscosity solution theory (see Pham [26, Sect. 4.4.2]), the function \(\varPhi_{\varepsilon}\) attains a maximum at \((x_{\epsilon},y_{\epsilon},k_{\epsilon},\ell_{\epsilon})\), which converges (up to a subsequence) to \((x_{0},x_{0},k_{0},k_{0})\) when \(\varepsilon\) goes to zero. Moreover,

$$\begin{aligned} \lim_{\epsilon\rightarrow 0}\frac{|x_{\epsilon}-y_{\epsilon}|^{2}+|k_{\epsilon}-\ell_{\epsilon}|^{2}}{2\epsilon} = 0. \end{aligned}$$

Applying Theorem 3.2 in Crandall et al. [7], we get the existence of symmetric square matrices \(M_{\varepsilon}\), \(N_{\varepsilon}\) of size 2 such that

$$\begin{aligned} (p_{\varepsilon}, M_{\varepsilon}) \;\in\; J^{2,+} u(x_{\varepsilon},k_{\varepsilon}), \\ (q_{\varepsilon}, N_{\varepsilon}) \;\in\; J^{2,-} w^{\lambda}(y_{\varepsilon},\ell_{\varepsilon}) \end{aligned}$$

and

$$\begin{aligned} \left( \textstyle\begin{array}{cc} M_{\varepsilon}& 0 \\ 0 & -N_{\varepsilon}\end{array}\displaystyle \right) \; \leq\; D^{2} \phi_{\varepsilon}(x_{\varepsilon}, k_{\varepsilon},y_{\epsilon},\ell_{\epsilon}) + \varepsilon\big(D^{2} \phi_{\epsilon}(x_{\varepsilon}, k_{\varepsilon},y_{\epsilon},\ell _{\epsilon})\big)^{2}, \end{aligned}$$
(6.14)

where

$$\begin{aligned} p_{\varepsilon}&=D_{x,k}\phi_{\epsilon}(x_{\epsilon},k_{\epsilon},y_{\epsilon},\ell _{\epsilon}) = \bigg(\frac{x_{\epsilon}- y_{\epsilon}}{\epsilon} + (x_{\epsilon}- x_{0})^{3},\frac{k_{\epsilon}- \ell_{\epsilon}}{\epsilon} + (k_{\epsilon}- k_{0})^{3}\bigg),\\ q_{\varepsilon}&=-D_{y,\ell}\phi_{\epsilon}(x_{\epsilon},k_{\epsilon},y_{\epsilon},\ell _{\epsilon}) = \bigg(\frac{x_{\epsilon}- y_{\epsilon}}{\epsilon},\frac {k_{\epsilon}- \ell_{\epsilon}}{\epsilon}\bigg) \end{aligned}$$

and

$$D^{2} \phi_{\varepsilon}(x_{\varepsilon}, k_{\varepsilon},y_{\epsilon},\ell_{\epsilon}) = \frac {1}{\epsilon} \left( \textstyle\begin{array}{c@{\quad}c} I_{2} & -I_{2}\\ -I_{2} & I_{2} \end{array}\displaystyle \right) + \left( \textstyle\begin{array}{c@{\quad}c@{\quad}c@{\quad}c} 3(x_{\epsilon}- x_{0})^{2} & 0& 0& 0\\ 0 & 3(k_{\epsilon}- k_{0})^{2} & 0& 0\\ 0&0&0&0\\ 0&0&0&0 \end{array}\displaystyle \right), $$

so that

$$\begin{aligned} &D^{2} \phi_{\varepsilon}(x_{\varepsilon}, k_{\varepsilon},y_{\epsilon},\ell_{\epsilon}) + \varepsilon \big(D^{2} \phi_{\epsilon}(x_{\varepsilon}, k_{\varepsilon},y_{\epsilon},\ell _{\epsilon})\big)^{2} \\ &= \frac{3}{\epsilon} \left( \textstyle\begin{array}{c@{\quad}c} I_{2} & -I_{2}\\ -I_{2} & I_{2} \end{array}\displaystyle \right)\\ &\phantom{=}+ \left( \textstyle\begin{array}{c@{\quad}c@{\quad}c@{\quad}c} 9(x_{\epsilon}- x_{0})^{2}(1 + \epsilon(x_{\epsilon}- x_{0})^{2}) & 0& 0& 0\\ 0 & 9(k_{\epsilon}- k_{0})^{2}(1 + \epsilon(k_{\epsilon}- k_{0})^{2}) & 0& 0\\ 0&0&0&0\\ 0&0&0&0 \end{array}\displaystyle \right). \end{aligned}$$

Equation (6.14) implies that

$$\begin{aligned} &\text{tr} \bigg(\frac{\sigma^{2}\beta(k_{\epsilon})^{2}}{2}M_{\epsilon}- \frac {\sigma^{2}\beta(\ell_{\epsilon})^{2}}{2}N_{\epsilon}\bigg) \\ & \leq\frac{3\sigma^{2}}{2\epsilon}\big(\beta(k_{\epsilon})^{2} - \beta (\ell _{\epsilon})^{2}\big) + \frac{9\sigma^{2} \beta(k_{\epsilon})^{2}}{2} (x_{\epsilon}- x_{0})^{2}\big(1+\epsilon(x_{\epsilon}- x_{0})^{2}\big). \end{aligned}$$
(6.15)

Because \(u\) and \(w^{\lambda}\) are respectively a subsolution and a strict supersolution, we have

$$\begin{aligned} \min\bigg\{ &-\Big(\beta(k_{\epsilon})\mu- \alpha\big((k_{\epsilon}-x_{\epsilon})^{+}\big)\Big) \bigg(\frac{x_{\epsilon}-y_{\epsilon}}{\epsilon }+(x_{\epsilon}-x_{0})^{3}\bigg) \\ & -\text{tr}\bigg(\frac{\sigma^{2} \beta(k_{\epsilon})^{2}}{2}M_{\epsilon}\bigg) + r u(x_{\epsilon},k_{\epsilon}), \frac{x_{\epsilon}-y_{\epsilon}}{\epsilon }+(x_{\epsilon}-x_{0})^{3} - 1,\\ &\gamma\bigg(\frac{x_{\epsilon}-y_{\epsilon}}{\epsilon}+(x_{\epsilon}-x_{0})^{3}\bigg) - \bigg(\frac{k_{\epsilon}-\ell_{\epsilon}}{\epsilon }+(k_{\epsilon}-k_{0})^{3}\bigg),\\ &\gamma\bigg(\frac{x_{\epsilon}-y_{\epsilon}}{\epsilon}+(x_{\epsilon}-x_{0})^{3}\bigg) + \bigg(\frac{k_{\epsilon}-\ell_{\epsilon}}{\epsilon }+(k_{\epsilon}-k_{0})^{3}\bigg) \bigg\} \leq0 \end{aligned} $$

and

$$\begin{aligned} \min\bigg\{ &-\Big(\beta(\ell_{\epsilon})\mu- \alpha\big((\ell _{\epsilon}-y_{\epsilon})^{+}\big)\Big) \frac{x_{\epsilon}-y_{\epsilon}}{\epsilon} - \text{tr}\bigg(\frac{\sigma^{2} \beta(\ell_{\epsilon})^{2}}{2}N_{\epsilon}\bigg) + r w^{\lambda}(y_{\epsilon},\ell_{\epsilon}), \\ &\frac{x_{\epsilon}-y_{\epsilon}}{\epsilon} - 1,\gamma\frac {x_{\epsilon}-y_{\epsilon}}{\epsilon}- \frac{k_{\epsilon}-\ell_{\epsilon}}{\epsilon },\gamma\frac{x_{\epsilon}-y_{\epsilon}}{\epsilon}+ \frac{k_{\epsilon}-\ell _{\epsilon}}{\epsilon} \bigg\} \geq\lambda. \end{aligned}$$
(6.16)

We then distinguish the following four cases:

Case 1. If \(\frac{x_{\epsilon}-y_{\epsilon}}{\epsilon}+(x_{\epsilon}-x_{0})^{3} - 1 \leq0\), we get from (6.16) that \(\lambda + (x_{\epsilon}- x_{0})^{3} \leq0\), yielding a contradiction when \(\epsilon\) goes to 0.

Case 2. If \(\gamma(\frac{x_{\epsilon}-y_{\epsilon}}{\epsilon}+(x_{\epsilon}-x_{0})^{3}) - (\frac{k_{\epsilon}-\ell_{\epsilon}}{\epsilon}+(k_{\epsilon}-k_{0})^{3}) \leq0\), we get from (6.16) that \(\lambda+ \gamma(x_{\epsilon}-x_{0})^{3} -(k_{\epsilon}-k_{0})^{3} \leq0\), yielding a contradiction when \(\epsilon\) goes to 0.

Case 3. If \(\gamma(\frac{x_{\epsilon}-y_{\epsilon}}{\epsilon}+(x_{\epsilon}-x_{0})^{3}) + (\frac{k_{\epsilon}-\ell_{\epsilon}}{\epsilon}+(k_{\epsilon}-k_{0})^{3}) \leq0\), we get from (6.16) that \(\lambda+ \gamma(x_{\epsilon}-x_{0})^{3} +(k_{\epsilon}-k_{0})^{3} \leq 0\), yielding a contradiction when \(\epsilon\) goes to 0.

Case 4. If

$$\begin{aligned} &-\Big(\beta(k_{\epsilon})\mu- \alpha\big((k_{\epsilon}-x_{\epsilon})^{+}\big)\Big) \bigg(\frac{x_{\epsilon}-y_{\epsilon}}{\epsilon}+(x_{\epsilon}-x_{0})^{3}\bigg) \\ &- \text{tr}\bigg(\frac{\sigma^{2} \beta(k_{\epsilon})^{2}}{2}M_{\epsilon}\bigg) + r u(x_{\epsilon},k_{\epsilon}) \leq0, \end{aligned}$$

we deduce from

$$-\Big(\beta(\ell_{\epsilon})\mu- \alpha\big((\ell_{\epsilon}-y_{\epsilon})^{+}\big)\Big) \frac{x_{\epsilon}-y_{\epsilon}}{\epsilon} - \text{tr}\bigg(\frac{\sigma^{2} \beta(\ell_{\epsilon})^{2}}{2}N_{\epsilon}\bigg) + r w^{\lambda}(y_{\epsilon},\ell_{\epsilon}) \geq\lambda $$

that

$$\begin{aligned} &\frac{x_{\epsilon}-y_{\epsilon}}{\epsilon}\Big(\mu\big(\beta(\ell _{\epsilon})-\beta(k_{\epsilon})\big)+\alpha\big((k_{\epsilon}-x_{\epsilon})^{+}\big)-\alpha\big((\ell_{\epsilon}-y_{\epsilon})^{+}\big)\Big)\\ &-\text{tr}\bigg(\frac{\sigma^{2}\beta(k_{\epsilon})^{2}}{2}M_{\epsilon}\bigg) + \text{tr}\bigg(\frac{\sigma^{2}\beta(\ell_{\epsilon})^{2}}{2}N_{\epsilon}\bigg) \\ &-\Big(\beta(k_{\epsilon})\mu- \alpha\big((k_{\epsilon}-x_{\epsilon})^{+}\big)\Big) (x_{\epsilon}-x_{0})^{3} \\ &+ r\big(u(x_{\epsilon},k_{\epsilon}) - w^{\lambda}(y_{\epsilon},\ell _{\epsilon})\big)\leq-\lambda. \end{aligned}$$

Using (6.15), we get

$$\begin{aligned} &\frac{x_{\epsilon}-y_{\epsilon}}{\epsilon}\Big(\mu\big(\beta(\ell _{\epsilon})-\beta(k_{\epsilon})\big)+\alpha\big((k_{\epsilon}-x_{\epsilon})^{+}\big)-\alpha\big((\ell_{\epsilon}-y_{\epsilon})^{+}\big)\Big)\\ &-\Big(\beta(k_{\epsilon})\mu- \alpha\big((k_{\epsilon}-x_{\epsilon})^{+}\big)\Big) (x_{\epsilon}-x_{0})^{3} + r\big(u(x_{\epsilon},k_{\epsilon}) - w^{\lambda}(y_{\epsilon},\ell_{\epsilon})\big) \\ &\leq-\lambda+ \frac{3\sigma^{2}}{2\epsilon}\big(\beta(k_{\epsilon})^{2} - \beta(\ell_{\epsilon})^{2}\big) + \frac{9\sigma^{2} \beta(k_{\epsilon})^{2}}{2} (x_{\epsilon}- x_{0})^{2}\big(1+\epsilon(x_{\epsilon}- x_{0})^{2}\big). \end{aligned} $$

By sending \(\varepsilon\) to zero and using the continuity of \(u\), \(w^{\lambda}\), \(\alpha\) and \(\beta\), we obtain the required contradiction, namely \(r M \leq-\lambda\).

We have thus shown for every \(\lambda\in[0,1]\) that (6.12) holds, i.e., \(\sup_{S} (u - w^{\lambda}) \leq0\). This implies by letting \(\lambda\to0\) the strong comparison result \(u \leq w\) for any subsolution \(u\) and supersolution \(w\). Clearly, this strong comparison result implies uniqueness. This ends the proof of Theorem 5.4.  □

6.2 Proof of Proposition 5.5

Because \(\beta\) is concave and \(\beta'\) goes to 0, the existence of \(a\) is equivalent to assuming

$$ \sigma^{2}\beta'(0) \ge\frac{\mu}{1-\delta}. $$
(6.17)

Let us define the function \(w_{A}\) for \(A>0\) as the unique solution on \((a, \infty)\) of the Cauchy problem

$$\mu\beta(x)w_{A}'(x) + \frac{\sigma^{2}\beta(x)^{2}}{2}w_{A}''(x) - r w_{A}(x) = 0, $$

with \(w_{A}(x)=Ax^{\delta}\) for \(0 \le x \le a\) and \(w_{A}\) differentiable at \(a\).

Remark 6.1

The above Cauchy problem is well defined with the condition that \(w_{A}\) is differentiable at \(a\). Moreover, it is easy to check, using the definition of \(a\), that the function \(w_{A}\) is also \(C^{2}\). Because the spread \(\alpha\) is high, the shareholders optimally choose not to tap the credit line, but rather adjust costlessly their level of investment.

Lemma 6.2

For every \(A>0\), the function \(w_{A}\) is increasing.

Proof

Clearly, \(w_{A}\) is increasing and therefore positive on \([0,a]\). If we define \(c=\min\{x> a:w'_{A}(x)=0\}\), then \(w_{A}(c)>0\) because \(w_{A}\) is increasing and positive in a left neighbourhood of \(c\). Thus, according to the differential equation, we have \(w_{A}''(c)\ge0\), which implies that \(w_{A}\) is also increasing in a right neighbourhood of \(c\). Therefore, \(w'_{A}\) cannot become negative. □

Lemma 6.3

For every \(A>0\), there is some \(b_{A}\) such that \(w''_{A}(b_{A})=0\) and \(w_{A}\) is a concave function on \((a, b_{A})\).

Proof

Assume by way of contradiction that \(w_{A}''\) does not vanish. Using (5.6) and (5.5), we have

$$\frac{\sigma^{2}\beta^{2}(a)}{2} w_{A}''(a)=-rAa^{\delta}. $$

Because \(w_{A}''(a)<0\) and \(w_{A}''\) does not vanish, we have \(w_{A}''<0\) on \((a,\infty)\). This implies that \(w_{A}'\) is strictly decreasing and, by Lemma 6.2, bounded below by 0; therefore \(w_{A}\) is an increasing concave function. Hence \({\lim_{x\to\infty}}w_{A}'(x)\) exists and is denoted by \(\ell\). Letting \(x \to\infty\) in the differential equation, we obtain, because \(\beta\) has a finite limit,

$$\frac{\sigma^{2}\bar{\beta}^{2}}{2}\lim_{x\to\infty}w_{A}''(x)=r\lim _{x\to \infty}w_{A}(x)-\mu\bar{\beta} \ell. $$

Therefore, either \(\lim_{x\to\infty}w_{A}(x) = +\infty\), in which case the identity above forces \(w_{A}''>0\) for large \(x\), a contradiction, or the limit is finite, in which case \({\lim_{x\to\infty}}w_{A}''(x)=0\) by the mean value theorem. In the second case, differentiating the differential equation, we have

$$\begin{aligned} &\mu\beta'(x)w_{A}'(x)+\mu\beta(x)w_{A}''(x) + \sigma^{2}\beta'(x)\beta (x)w_{A}''(x) \\ &+ \frac{\sigma^{2}\beta(x)^{2}}{2}w_{A}'''(x)-r w_{A}'(x) = 0. \end{aligned}$$
(6.18)

Proceeding analogously, we obtain that \({\lim_{x\to\infty }}w_{A}'''(x)=0\) and thus \(\ell=0\). Coming back to the differential equation, we get

$$0=r\lim_{x\to\infty}w_{A}(x), $$

contradicting the fact that \(w_{A}\) is increasing. Setting \(b_{A}:=\inf\{x \ge a:w_{A}''(x)=0\}\) allows us to conclude. □

Lemma 6.4

There exists \(A^{*}\) such that \(w'_{A^{*}}(b_{A^{*}})=1\).

Proof

For every \(A>0\), we have

$$ \mu\beta(b_{A})w'_{A}(b_{A})=rw_{A}(b_{A}). $$
(6.19)

Let \(A_{1}=\frac{\mu\bar{\beta}}{ra^{\delta}}\). Lemma 6.2 yields

$$ w_{A_{1}}(b_{A_{1}})\ge w_{A_{1}}(a) =\frac{\mu\bar{\beta}}{r} \ge\frac{\mu\beta(b_{A_{1}})}{r}. $$

Therefore, (6.19) yields \(w'_{A_{1}}(b_{A_{1}})\ge1\). On the other hand, let \(A_{2}=\frac{a^{1-\delta}}{\delta}\). By construction, \(w'_{A_{2}}(a)=1\) and thus \(w'_{A_{2}}(b_{A_{2}}) \le1\) by the concavity of \(w_{A_{2}}\) on \((0,b_{A_{2}})\). Thus, by continuity, there is some \(A^{*} \in[\min (A_{1},A_{2}),\max(A_{1},A_{2})]\) such that \(w'_{A^{*}}(b_{A^{*}})=1\). □

Hereafter, we set \(b=b_{A^{*}}\).
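The construction in Lemmas 6.2–6.4 lends itself to a simple shooting procedure. The sketch below is purely illustrative: the function \(\beta\) and the parameter values are hypothetical, and we take \(\delta = \frac{2r\sigma^{2}}{\mu^{2}+2r\sigma^{2}}\) and \(a\) as the positive root of \(\beta(a)=\frac{\mu a}{\sigma^{2}(1-\delta)}\), which is how we read conditions (5.5) and (5.6) off the computations in the proof of Proposition 6.6 below; if the actual conditions differ, only those two lines need changing.

```python
# Numerical illustration of Lemmas 6.2-6.4: integrate the Cauchy problem for
# w_A beyond a, locate b_A as the first zero of w_A'', and bisect over A to
# enforce w'_{A*}(b_{A*}) = 1.  All model primitives below (beta, mu, sigma, r)
# are hypothetical, as are the identities used for delta and a.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

mu, sigma, r, beta_bar = 0.3, 1.0, 0.05, 1.0
beta = lambda x: beta_bar * (1.0 - np.exp(-x / beta_bar))
delta = 2.0 * r * sigma**2 / (mu**2 + 2.0 * r * sigma**2)

# Threshold a: with these values sigma^2*beta'(0) >= mu/(1-delta), so a exists.
a = brentq(lambda x: beta(x) - mu * x / (sigma**2 * (1.0 - delta)), 1e-8, 50.0)

def rhs(x, y):
    w, wp = y
    wpp = (r * w - mu * beta(x) * wp) * 2.0 / (sigma**2 * beta(x)**2)
    return [wp, wpp]

def wpp_sign(x, y):
    return r * y[0] - mu * beta(x) * y[1]        # proportional to w_A''(x)
wpp_sign.terminal = True
wpp_sign.direction = 1                            # w_A'' crosses zero from below at b_A

def slope_at_b(A):
    """Return (b_A, w_A'(b_A)) for the solution pasted to A*x^delta at a."""
    y0 = [A * a**delta, A * delta * a**(delta - 1.0)]
    sol = solve_ivp(rhs, (a, 200.0), y0, events=wpp_sign, rtol=1e-10, atol=1e-12)
    return sol.t_events[0][0], sol.y_events[0][0][1]

# Lemma 6.4: w'_{A1}(b_{A1}) >= 1 and w'_{A2}(b_{A2}) <= 1, so bisect in between.
A1 = mu * beta_bar / (r * a**delta)
A2 = a**(1.0 - delta) / delta
A_star = brentq(lambda A: slope_at_b(A)[1] - 1.0, min(A1, A2), max(A1, A2))
b_star, slope = slope_at_b(A_star)
print(f"a = {a:.4f}, delta = {delta:.4f}, A* = {A_star:.4f}, b = {b_star:.4f}, w'(b) = {slope:.4f}")
```

The bisection over \(A\) mirrors the intermediate value argument of Lemma 6.4.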

Lemma 6.5

We have \(\mu\beta'(b)\le r\).

Proof

Differentiating the differential equation for \(w_{A^{*}}\) and plugging in \(x=b\), where \(w_{A^{*}}'(b)=1\) and \(w_{A^{*}}''(b)=0\), we get

$$\frac{\sigma^{2}\beta(b)^{2}}{2}w_{A^{*}}'''(b)+\mu\beta'(b)-r=0. $$

Because \(w''_{A^{*}}\le0\) on \((a,b)\) and \(w''_{A^{*}}(b)=0\), we have \(w_{A^{*}}'''(b) \ge0\), implying the result. □

Let us define

$$v(x)= \left\{ \textstyle\begin{array}{l@{\quad}l} w_{A^{*}}(x),& x\le b,\\ x-b+\frac{\mu\beta(b)}{r},& x \ge b. \end{array}\displaystyle \right. $$
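Note that the two pieces of \(v\) paste together in a \(C^{2}\) fashion at \(b\): since \(w'_{A^{*}}(b)=1\), the identity (6.19) gives

$$w_{A^{*}}(b)=\frac{\mu\beta(b)}{r}w'_{A^{*}}(b)=\frac{\mu\beta(b)}{r}, $$

so that at \(x=b\) both expressions have the same value \(\frac{\mu\beta(b)}{r}\), the same first derivative 1, and the same second derivative \(w''_{A^{*}}(b)=0\).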

We are in a position to prove the following result.

Proposition 6.6

The shareholders’ value is \(v\).

Proof

We have to check that \((v,b)\) satisfies the HJB free boundary problem

$$ \max\big\{ \max_{k} \mathcal{L}_{k} v(x),1-v'(x)\big\} =0. $$
(6.20)

By construction, \(v\) is a \(C^{2}\) concave function on \((0, \infty)\) satisfying \(v'\ge1\). It remains to check that \(\max_{k}\mathcal {L}_{k}v(x)\le0\). For \(x >b\), we have

$$\mathcal{L}_{k}v(x)=\mu\beta(k) -\alpha\big((k-x)^{+}\big)-\mu\beta (b) -r(x-b). $$

If \(k\le x\), concavity of \(\beta\) and Lemma 6.5 imply

$$ \mathcal{L}_{k}v(x)= \mu\big(\beta(x)-\beta(b)\big)-r(x-b) \le\big(\mu\beta'(b)-r\big)(x-b) \le0. $$

If \(k \ge x\), we differentiate \(\mathcal{L}_{k}v(x)\) with respect to \(k\) and obtain, using again concavity of \(\beta\) and convexity of \(\alpha\), that

$$\frac{\partial\mathcal{L}_{k}v(x)}{\partial k}=\mu\beta'(k)-\alpha '(k-x)\le\mu\beta'(0)-\alpha'(0) \le0. $$

Therefore, \(\mathcal{L}_{k}v(x) \le\mathcal{L}_{x}v(x)\le0\).

Let \(x < b\). Because \(v\) is concave, the same argument as in the previous lines shows that

$$\frac{\partial\mathcal{L}_{k} v(x)}{\partial k}\le0 \quad\mbox{ for } k \ge x $$

and therefore

$$\max_{k\ge0}\mathcal{L}_{k}v(x)=\max_{k \le x}\mathcal{L}_{k}v(x). $$

The first order condition gives for \(0 \leq k < x\) that

$$ \frac{\partial}{\partial k}(\mathcal{L}_{k}v)(x) = \mu\beta'(k) v'(x) + \sigma^{2}\beta'(k)\beta(k)v''(x)= \beta'(k)\big(\mu v'(x) + \sigma ^{2}\beta (k)v''(x)\big). $$

Thus for \(0< x< a\), we have

$$\frac{\partial}{\partial k}(\mathcal{L}_{k}v)(x)= \beta '(k)A^{*}x^{\delta -2}\delta\big(\mu x+ \sigma^{2}\beta(k)(\delta-1)\big), $$

which gives

$$\left\{ \begin{aligned} \frac{\partial}{\partial k}(\mathcal{L}_{k}v)(x) \bigg|_{k=0}> 0,\\ \frac{\partial}{\partial k}(\mathcal{L}_{k}v)(x)\bigg|_{k=x} < 0. \end{aligned} \right. $$

Therefore the maximum \(k^{*}(x)\) of \(\mathcal{L}_{k}v(x)\) lies in the interior of the interval \([0,x]\) and satisfies

$$\beta\big(k^{*}(x)\big) = \frac{\mu x}{\sigma^{2}(1-\delta)} \quad \text{for all } 0 < x < a. $$

Hence, for \(x \le a\), we have by construction

$$ \max_{0\leq k\leq x}\mathcal{L}_{k}v(x) = \frac{\mu^{2}x}{\sigma ^{2}(1-\delta )}A^{*}\delta x^{\delta-1} + \frac{\sigma^{2}\mu^{2}x^{2}}{2\sigma ^{4}(1-\delta )^{2}}A^{*}\delta(\delta-1)x^{\delta-2}-r A^{*}x^{\delta}=0 . $$

Now fix \(x \in(a,b)\). Note that \({\frac{\partial}{\partial k}}(\mathcal{L}_{k}v)(x)\) has the same sign as \(\mu v'(x) + \sigma^{2}\beta(k)v''(x) \) because \(\beta\) is strictly increasing. Moreover, since \(v\) is concave and \(\beta\) increasing, we have

$$\min_{0 \leq k \leq x} \mu v'(x) + \sigma^{2}\beta(k)v''(x) = \mu v'(x) + \sigma^{2}\beta(x)v''(x). $$

Thus, it suffices to prove \(\mu v'(x) + \sigma^{2}\beta(x)v''(x) \geq0\) for \(x \in(a,b)\), or equivalently, because \(\beta\) is a positive function, that the function \(\phi\) defined as

$$\phi(x) = \mu\beta(x) v'(x) + \sigma^{2}\beta(x)^{2}v''(x) $$

is positive. We argue by contradiction, assuming there is some \(x \in (a,b)\) such that \(\phi(x) < 0\). As \(\phi(a) = 0\) by (5.5) and \(\phi(b) > 0\), there is some \(x_{1} \in[a,b]\) such that

$$\left\{ \begin{aligned} \phi(x_{1}) &< 0,\\ \phi'(x_{1}) &= 0. \end{aligned} \right. $$

Using the differential equation (6.18) satisfied by \(v'\), we obtain

$$\phi'(x_{1}) = \big(2 r - \mu\beta'(x_{1})\big)v'(x_{1}) - \mu\beta(x_{1}) v''(x_{1}) = 0, $$

from which we deduce that

$$\begin{aligned} \phi(x_{1}) &= \mu\beta(x_{1}) v'(x_{1}) + \sigma^{2}\beta(x_{1})^{2}v''(x_{1}) \\ &= \mu\beta(x_{1}) v'(x_{1}) + \frac{\sigma^{2}\beta(x_{1})}{\mu} \big(2 r - \mu\beta'(x_{1})\big)v'(x_{1}) \\ &= \beta(x_{1}) v'(x_{1}) \bigg( \mu+ \frac{2r\sigma^{2}}{\mu} - \sigma^{2} \beta'(x_{1}) \bigg). \end{aligned} $$

But \(x_{1} \geq a\) and thus \(\beta'(x_{1}) \leq\beta'(a)\). Moreover, by the definition of \(a\), we have \(\sigma^{2}\beta'(a) \le\frac{ \mu }{1-\delta}\). Therefore, (5.6) yields

$$\begin{aligned} \phi(x_{1}) &\geq\beta(x_{1}) v'(x_{1}) \bigg(\mu+ \frac{2r\sigma ^{2}}{\mu} - \frac{\mu}{1-\delta}\bigg)\\ &\geq\beta(x_{1}) v'(x_{1}) \bigg( \frac{2r\sigma^{2}}{\mu} - \mu\frac {\delta}{1-\delta}\bigg)\\ &\geq\beta(x_{1}) v'(x_{1}) \bigg( \frac{2r\sigma^{2}}{\mu} - \mu\frac {2r\sigma^{2}}{\mu^{2} + 2r\sigma^{2}}\frac{\mu^{2}+2r\sigma^{2}}{\mu^{2}} \bigg)\\ &= 0, \end{aligned} $$

which is a contradiction. □

To complete the characterization of the shareholders’ value when the spread is high, we have to study the optimal policy when (6.17) is not fulfilled. We expect that \(a=0\) in that case, which means that for all \(x\), the manager should invest all the cash in the productive asset. Thus we are interested in the solutions to

$$ \mu\beta(x)w'(x) + \frac{\sigma^{2}\beta(x)^{2}}{2}w''(x) - r w(x) = 0 $$
(6.21)

such that \(w(0)=0\).

Proposition 6.7

Suppose that the functions \(x \mapsto\frac{x}{\beta(x)}\) and \(x \mapsto\frac{x^{2}}{\beta(x)^{2}}\) are analytic at 0 with a radius of convergence \(R\). Then the solutions \(w\) to (6.21) such that \(w(0)=0\) are given by

$$w(x) = \sum_{k=0}^{\infty}A_{k} x^{k+y_{1}} $$

with

$$A_{k} = \frac{1}{-I(k+y_{1})}\sum_{j=0}^{k-1}\frac{(j+y_{1})p^{(k-j)}(0) + q^{(k-j)}(0)}{(k-j)!}A_{j}, \quad\textit{for all}\ k \geq1, $$

where the functions \(p\) and \(q\) are

$$\left\{ \begin{aligned} p(x) &= \frac{2\mu x}{\sigma^{2}\beta(x)},\\ q(x) &= - \frac{2r x^{2}}{\sigma^{2} \beta(x)^{2}}, \end{aligned} \right. $$

the function \(I\) is given by

$$I(y) = \mu\beta'(0)y+\frac{\sigma^{2}}{2}\beta'(0)^{2}y(y-1)-r $$

and \(y_{1}\) is the positive root of \(I\) given by

$$y_{1} = \frac{-\mu+ \frac{\sigma^{2}}{2}\beta'(0) + \sqrt{(\mu- \frac {\sigma^{2}}{2}\beta'(0))^{2} + 2r \sigma^{2}}}{\sigma^{2}\beta'(0)}. $$

The radius of convergence of \(w\) is at least equal to \(R\).

Proof

This follows from Fuchs' theorem [1, Sect. 9.5]. □
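For completeness, \(y_{1}\) can be checked directly: \(I\) is the quadratic

$$I(y) = \frac{\sigma^{2}\beta'(0)^{2}}{2}y^{2} + \Big(\mu\beta'(0)-\frac{\sigma^{2}\beta'(0)^{2}}{2}\Big)y - r, $$

whose roots, by the quadratic formula, are

$$y_{\pm} = \frac{-\mu+ \frac{\sigma^{2}}{2}\beta'(0) \pm\sqrt{\big(\mu- \frac{\sigma^{2}}{2}\beta'(0)\big)^{2} + 2r \sigma^{2}}}{\sigma^{2}\beta'(0)}; $$

since \(I(0)=-r<0\), the root \(y_{1}=y_{+}\) is the positive one.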

Note that choosing \(A_{0} > 0\) is enough to characterize a unique solution \(w\) of (6.21) because the \((A_{k})_{k\geq1}\) are given by a recurrence relation. Moreover, because \(\mu\beta'(0) \geq r\), we have \(y_{1} < 1\). As a consequence, if we choose \(A_{0}>0\), we have

$$\lim_{x \rightarrow0} w'(x) = +\infty\quad\mbox{ and }\quad\lim_{x \rightarrow0} w^{\prime\prime}(x) = -\infty. $$

Thus, proceeding as in Lemma 6.3, we can prove the existence of \(b\) such that \(w''(b) = 0\) for all \(A_{0} > 0\). Now, observe that \(b\) does not depend on \(A_{0}\) (equation (6.21) is linear, so \(w\) scales with \(A_{0}\)), so that we can choose \(A_{0} = A^{*}\) in order to have \(w'(b) = 1\). Hence, we have built a concave solution \(w^{*}\) to (6.21) with \(w^{*}(0) = 0\), \((w^{*})'(b) = 1\) and \((w^{*})''(b) = 0\). We extend \(w^{*}\) linearly on \([b,\infty)\) as usual to obtain a \(C^{2}\) function on \([0,\infty)\).
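In the same spirit as the sketch following Lemma 6.4, the construction of \(w^{*}\) can be illustrated numerically: start from the Frobenius leading behaviour \(A_{0}x^{y_{1}}\) near 0, integrate (6.21), locate \(b\) as the first zero of \(w''\), and rescale \(A_{0}\) by linearity. Again, \(\beta\) and the parameter values below are hypothetical choices intended for the regime where (6.17) fails.

```python
# Illustrative construction of w* in the regime a = 0: integrate (6.21) from a
# point close to 0 using the Frobenius leading term, find b where w'' vanishes,
# and rescale A0 so that w'(b) = 1.  Parameters satisfy mu*beta'(0) >= r and are
# hypothetical, not taken from the paper.
import numpy as np
from scipy.integrate import solve_ivp

mu, sigma, r, beta_bar = 0.3, 0.4, 0.05, 1.0
beta = lambda x: beta_bar * (1.0 - np.exp(-x / beta_bar))
beta_p0 = 1.0                                    # beta'(0)

# Positive root y1 of the indicial polynomial I, as in Proposition 6.7.
y1 = (-mu + 0.5 * sigma**2 * beta_p0
      + np.sqrt((mu - 0.5 * sigma**2 * beta_p0)**2 + 2.0 * r * sigma**2)) / (sigma**2 * beta_p0)

def rhs(x, y):
    w, wp = y
    wpp = (r * w - mu * beta(x) * wp) * 2.0 / (sigma**2 * beta(x)**2)
    return [wp, wpp]

def second_derivative(x, y):
    return r * y[0] - mu * beta(x) * y[1]        # proportional to w''(x)
second_derivative.terminal = True
second_derivative.direction = 1                   # w'' < 0 near 0 and crosses zero at b

# Start slightly away from the singular point x = 0 using the Frobenius leading term.
x0, A0 = 1e-3, 1.0
y_init = [A0 * x0**y1, A0 * y1 * x0**(y1 - 1.0)]
sol = solve_ivp(rhs, (x0, 200.0), y_init, events=second_derivative,
                method="LSODA", rtol=1e-9, atol=1e-12)
b = sol.t_events[0][0]
slope_at_b = sol.y_events[0][0][1]
A_star = A0 / slope_at_b                          # b does not depend on A0 (linear ODE)
print(f"y1 = {y1:.4f}, b = {b:.4f}, A* = {A_star:.4f}")
```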

Proposition 6.8

The shareholders’ value is \(w^{*}\).

Proof

It suffices to check that \(w^{*}\) satisfies the free boundary problem (6.20). By construction, \(w^{*}\) is a \(C^{2}\) concave function on \((0,\infty)\). Because \((w^{*})'(b) = 1\), we have

$$(w^{*})'(x) \geq1,\quad\forall x \in(0,b] $$

and

$$(w^{*})'(x) = 1, \quad\forall x \geq b. $$

On \([b, \infty)\), we have

$$\begin{aligned} \max_{k\geq0}\mathcal{L}_{k} w^{*}(x) = \max_{k \geq0}\Big(&\mu\beta (k) - \alpha\big((k-x)^{+}\big) - \mu\beta(b) + r (b - x)\Big)\\ =\max\Big\{ &\max_{k\leq x}\mu\beta(k) - \mu\beta(b) + r (b - x),\\ &\max_{k\geq x}\big(\mu\beta(k) - \alpha(k-x) \big)- \mu\beta (b) + r (b - x)\Big\} . \end{aligned} $$

Using that \(\beta\) is concave increasing, \(\alpha\) is convex and \(\alpha '(0+)> \mu\beta'(0+)\), we have

$$\max_{k\geq0}\mathcal{L}_{k} w^{*}(x) =\mu\beta(x) - \mu\beta(b) + r (b - x). $$

Then using the concavity of \(\beta\),

$$\max_{k\geq0}\mathcal{L}_{k}w^{*}(x) \leq0 \quad\text{for all } x \geq b. $$

It remains to show that for every \(x< b\)

$$\max_{k\geq0}\mathcal{L}_{k}w^{*}(x) = 0. $$

Using that \(\beta\) is concave, \(\alpha\) is convex, \(\alpha'(0)> \mu \beta '(0)\) and \(w^{*}\) is concave increasing, we have for all \(k>x\) that

$$\frac{\partial}{\partial k}(\mathcal{L}_{k} w^{*})(x) = \big(\mu\beta'(k) - \alpha'(k-x)\big)(w^{*})'(x) + \sigma^{2}\beta'(k)\beta(k)(w^{*})''(x) \leq0. $$

Thus,

$$\max_{k\geq0}\mathcal{L}_{k} w^{*} (x) = \max_{0\leq k\leq x}\mathcal{L}_{k} w^{*} (x). $$

Moreover, for \(0 < k < x\),

$$\begin{aligned} \frac{\partial}{\partial k}(\mathcal{L}_{k} w^{*})(x) &= \mu\beta'(k) (w^{*})'(x) + \sigma^{2}\beta'(k)\beta(k)(w^{*})''(x)\\ &= \beta'(k)\big(\mu(w^{*})'(x) + \sigma^{2}\beta(k)(w^{*})''(x)\big). \end{aligned} $$

We now show that for all \(x \in(0,b]\) and all \(k \leq x\),

$$\frac{\partial}{\partial k}(\mathcal{L}_{k} w^{*} )(x) \geq0. $$

Notice that \(\beta'(k) \geq0\) and

$$\min_{0 \leq k \leq x} \mu(w^{*})'(x) + \sigma^{2}\beta(k)(w^{*})''(x) = \mu (w^{*})'(x) + \sigma^{2}\beta(x)(w^{*})''(x), $$

because \((w^{*})''(x) \leq0\) and \(\beta\) is increasing. Thus it is enough to prove that for every \(x< b\),

$$\mu(w^{*})'(x) + \sigma^{2}\beta(x)(w^{*})''(x) \geq0, $$

or equivalently, using \(\beta\geq0\),

$$\phi(x) = \mu\beta(x) (w^{*})'(x) + \sigma^{2}\beta(x)^{2} (w^{*})''(x) \geq0 \quad\text{for } x< b. $$

We argue by contradiction, assuming the existence of \(x \in (0,b)\) such that \(\phi(x) < 0\). In a neighbourhood of 0, we have

$$(w^{*})'(x) \sim A^{*} y_{1}x^{y_{1} -1} $$

and

$$(w^{*})''(x) \sim A^{*} y_{1}(y_{1}-1)x^{y_{1}-2}. $$

From this we deduce, because \(\beta(x)x^{y_{1}-1}\le\beta'(0)x^{y_{1}}\), that

$$\begin{aligned} \lim_{x\rightarrow0} \beta(x)(w^{*})'(x) &= 0,\\ \lim_{x\rightarrow0} \beta(x)^{2}(w^{*})''(x) &= 0, \end{aligned}$$

yielding

$$\lim_{x\rightarrow0} \phi(x) = 0. $$

But \(\phi(b) > 0\); thus there is \(x_{1} \in(0,b)\) such that

$$\left\{ \begin{aligned} \phi(x_{1}) &< 0,\\ \phi'(x_{1}) &= 0. \end{aligned} \right. $$

To get the expression of \(\phi'\), we differentiate (6.21) to obtain

$$\phi'(x) = \big(2 r - \mu\beta'(x)\big) (w^{*})'(x) - \mu\beta(x) (w^{*})''(x). $$

Then using that \(\phi'(x_{1}) = 0\), we get that

$$\big(2 r - \mu\beta'(x_{1})\big) (w^{*})'(x_{1}) = \mu\beta(x_{1}) (w^{*})''(x_{1}), $$

from which we deduce

$$\begin{aligned} \phi(x_{1}) &= \mu\beta(x_{1}) (w^{*})'(x_{1}) + \sigma^{2}\beta (x_{1})^{2}(w^{*})''(x_{1}) \\ &= \mu\beta(x_{1}) (w^{*})'(x_{1}) + \frac{\sigma^{2}\beta(x_{1})}{\mu} \big(2 r - \mu\beta'(x_{1})\big)(w^{*})'(x_{1}) \\ &= \beta(x_{1}) (w^{*})'(x_{1}) \bigg( \mu+ \frac{2r\sigma^{2}}{\mu} - \sigma ^{2} \beta'(x_{1}) \bigg). \end{aligned} $$

Now, remember that \(x_{1} > 0\) and thus, by the concavity of \(\beta\), we have \(\beta'(x_{1}) \leq\beta'(0)\). Furthermore, \(\beta'(0) \leq\frac{\mu^{2}+2r \sigma^{2}}{\sigma^{2}\mu}\) when (5.5) is not fulfilled. Hence,

$$\phi(x_{1}) \geq\beta(x_{1}) (w^{*})'(x_{1}) \bigg(\mu+ \frac{2r\sigma ^{2}}{\mu } - \frac{\mu^{2} + 2r \sigma^{2}}{\mu}\bigg) \geq0 $$

which yields a contradiction and ends the proof. □

Cite this article

Pierre, E., Villeneuve, S. & Warin, X. Liquidity management with decreasing returns to scale and secured credit line. Finance Stoch 20, 809–854 (2016). https://doi.org/10.1007/s00780-016-0312-4
