Preprint


A peer-reviewed article of this preprint also exists.

This version is not peer-reviewed

Positive recurrence of a single-server queueing system is established under generalised conditions on the intensity of service which do not assume the existence of a density of the service time distribution, but only a certain integral-type lower bound as a sufficient condition. Positive recurrence implies the existence of the invariant distribution and a guaranteed (slow) rate of convergence to it in the total variation metric.


Submitted:

23 September 2023

Posted:

25 September 2023


Subject: Computer Science and Mathematics - Probability and Statistics

The goal of this paper is to establish **positive recurrence** of the model ${M}_{n}/G/1/\infty $ under certain assumptions; the Foster condition will be used for this. The intensity of service is assumed only partially at zero (as a lower left derivative value at zero of the “integrated intensity”); in addition, an integral-type condition on the “integrated intensity” over intervals of a certain length is assumed.

For the recent history of the topic see [1] – [7], [14] and many more; see also [15,18]. One of the reasons – although not the only one – why various versions of this system are so popular may be their intrinsic links to important topics of mathematical insurance theory, see [3].

In this paper we return to the less involved single-server system ${M}_{n}/GI/1/\infty $, where the intensity of arrivals may depend only on the number of customers in the system, with the goal of reviewing conditions for its positive recurrence. The importance of this property may be illustrated, for example, by the publications [1,12], where the investigation of the model assumes that it is in the “steady state”, which is a synonym of stationarity. As is well known, positive recurrence along with some mild mixing or coupling properties guarantees the existence of a stationary regime of the system. One particular aspect of this issue is how to achieve bounds without assuming the existence of an intensity of service in the $M/GI/1$ model and, more generally, in systems of Erlang – Sevastyanov type. Certain results in this direction were recently established in [19] for a slightly different model. Still, in [19] it is essential that the absolutely continuous part of the distribution function $F$ (in our notation) be non-degenerate; in the present paper this is not required and the approach is different.

Note that in such a model certain close results may be obtained by the methods of regenerative processes if it is assumed that the same distribution function $F$ (see below) has a sufficient number of moments. However, our conditions and methods are different. The main (moderate) hope of the author is that this approach may also be useful in studying ergodic properties of models of Erlang – Sevastyanov type, as happened with the earlier results and approaches based on the intensity of service as in [15,18], successfully applied in [16]. The present paper is an initial step in the programme of developing tools that could help attack the problem outlined recently in [17].

The paper consists of the introduction in Section 1; the setting and the main result in Section 2; two simple auxiliary lemmata in Section 3; the proof of the main result in Section 4; and two simple examples in Section 5, for the comparison of the sufficient conditions of Theorem 1 with conditions in terms of the intensity of service in the case where the latter exists.

The model is as follows. There is one server with an incoming flow of customers, or jobs; this flow is Poissonian with intensity ${\lambda}_{n}$, where $n$ is the number of customers in the system. If the server is idle and the queue is empty, it immediately starts the service of a newly arrived customer; if the queue of waiting customers is not empty, it starts the service of one of them. If the server is busy, then the newly arrived customer joins the queue, where it waits until the server completes the earlier job(s). The buffer for the queue is unlimited (denumerable). The discipline by which the server chooses the next customer from the queue is FIFO (“first in – first out”). All services are independent with the same distribution function $F$, and they are independent of the arrivals. A service is a synonym for a completed job. It is assumed that the mean value $\int_0^\infty x\,dF(x)=\int_0^\infty (1-F(x))\,dx=:\mu^{-1}$ is finite. It is assumed that

$$\mathsf{\Lambda}:=\underset{n}{sup}{\lambda}_{n}<\infty .$$
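To make the dynamics concrete, the model described above can be sketched as a small discrete-event simulation. All numerical choices here are illustrative assumptions only (not taken from the paper): the arrival intensity $\lambda_n = 2/(1+n)$ and Exp(1) service times, i.e. $F(x)=1-e^{-x}$.

```python
import random

def simulate(T=2000.0, seed=0):
    """Minimal discrete-event sketch of the M_n/G/1 dynamics described above.

    Illustrative assumptions (not from the paper): arrival intensity
    lambda_n = 2.0/(1+n) and Exp(1) service times, i.e. F(x) = 1 - exp(-x).
    Returns the time-average number of customers in the system.
    """
    rng = random.Random(seed)
    lam = lambda n: 2.0 / (1 + n)    # state-dependent Poisson arrival intensity
    t, n = 0.0, 0
    remaining = None                 # residual service time of the job in service
    area = 0.0                       # accumulates the integral of n_t dt
    while t < T:
        ta = rng.expovariate(lam(n)) # time to the next arrival (memoryless)
        if n > 0 and remaining <= ta:
            dt, event = remaining, "departure"
        else:
            dt, event = ta, "arrival"
        if t + dt >= T:
            area += n * (T - t)
            break
        area += n * dt
        t += dt
        if event == "departure":
            n -= 1                   # jump down; the elapsed service x_t resets to 0
            remaining = rng.expovariate(1.0) if n > 0 else None
        else:
            if n == 0:
                remaining = rng.expovariate(1.0)  # idle server starts service at once
            else:
                remaining -= dt      # the job in service keeps its residual time
            n += 1
    return area / T
```

Because the exponential inter-arrival time is memoryless and $\lambda_n$ is constant between events, redrawing the arrival clock after every event does not change the law of the process.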

The following state space is convenient for the description of the stochastic process which describes the model. It is convenient to identify the zero state $\{0\}$ with the zero couple $(0,0)$. Then the state space of the process is the union

$$\mathcal{X}=\{(0,0)\}\cup \bigcup_{n=1}^{\infty}\{(n,x):\; x\ge 0\},$$

and the process itself is described at all times by a two-dimensional vector $X_t=(n_t,x_t)$, $t\ge 0$, where $n_t=0,1,\dots$ stands for the number of customers in the system, both in the server and in the queue; after the identification of $\{0\}$ with $(0,0)$ mentioned above, $x_t=0$ in the case $n_t=0$ by definition; the second component $x_t$ stands for the elapsed time of the current service. The initial value $X_0=(n,x)$ may be any pair of non-negative values, and the process evolves in time according to the description above. By construction, it is a Markov process on the state space $\mathcal{X}$.

On some occasions it will be convenient to write $n_t=n(X_t)$ for the first component of $X_t$ and $x_t=x(X_t)$ for the second one. For any $X=(n,x)$ with $F(x)<1$, the “integrated intensity” of service $H$ is defined by the Stieltjes integral

$$H(x)=\int_0^x (1-F(s))^{-1}\,dF(s),$$

so that

$$dH(x)=(1-F(x))^{-1}\,dF(x).$$

The integral $\int_0^x (1-F(s))^{-1}\,dF(s)$ is assumed to be finite for any $x\ge 0$, and a bit more, see the assumptions in the next subsection. For simplicity of the setting and proofs, in order to avoid possible singularities it is assumed that

$$F(0)=0,\quad \&\quad F(x)<1,\quad \forall\, x\ge 0,$$
and
$$\tilde{F}(1):=\inf_{x\le 1}(F(x+1)-F(x))>0.$$
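For continuous $F$ the integrated intensity reduces to the usual integrated hazard, $H(x)=-\ln(1-F(x))$; e.g. for Exp(1) service, $H(x)=x$. The following quick numerical sanity check of the Stieltjes integral is illustrative only (the choice $F(x)=1-e^{-x}$ is an assumption for the example, not from the paper):

```python
import math

def H_numeric(F, x, steps=100_000):
    """Left-endpoint Riemann-Stieltjes sum for H(x) = int_0^x dF(s)/(1-F(s))."""
    total = 0.0
    prev_F = F(0.0)
    for i in range(1, steps + 1):
        s = x * i / steps
        Fs = F(s)
        total += (Fs - prev_F) / (1.0 - prev_F)  # weight frozen at the left endpoint
        prev_F = Fs
    return total

# Illustrative example: Exp(1) service, F(x) = 1 - e^{-x}, so H(x) = x.
F_exp = lambda s: 1.0 - math.exp(-s)
print(H_numeric(F_exp, 2.0))  # close to 2.0
```

For a distribution with atoms, the same sum picks up the jump terms $\Delta F(s)/(1-F(s-))$, which is exactly how $H$ may grow by jumps, as discussed below.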

Recall that it is assumed that

$$H(t)<\infty,\quad \forall\, t\ge 0.$$

Let us also assume that there exists a constant $r$ such that

$$r>2(1+\mathsf{\Lambda})$$

(so that $1/2>(1+\mathsf{\Lambda})/r$), where $\mathsf{\Lambda}$ was defined in (1), and such that

$$\int_0^t (1+s)\,dH(s)\ge rt,\quad \forall\, 0\le t\le 1,$$

and, moreover,

$$\inf_{x\ge 1}\int_x^{x+\Delta}(1+s)\,dH(s)\ge r\Delta,\quad \forall\, \frac12\le \Delta\le 1.$$

Let us highlight that the latter inequality is not supposed to hold for small values of $\Delta$ approaching zero, but only for $\Delta\in[1/2,1]$. The increase of the integral $\int_x^{x+\Delta}(1+s)\,dH(s)$ for a fixed $x$ as $\Delta\uparrow 1$ may be achieved due to a positive intensity $H'>0$ if this derivative exists, due to jumps of $H$, and also due to the increase of this function on sets of Lebesgue measure zero, as for the Cantor function.
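As a concrete illustration, the three conditions above can be verified numerically for exponential service, where $dH(s)=\mu\,ds$. The numerical values $\mu=10$, $\mathsf{\Lambda}=2$, $r=7$ are assumptions for this example only; the paper fixes no such values.

```python
# Illustrative check of the conditions r > 2(1+Lambda), the integral lower
# bound on [0,t] for t <= 1, and the lower bound over [x, x+D] for x >= 1,
# in the special case of Exp(mu) service, where dH(s) = mu ds.
# The values mu = 10, Lam = 2, r = 7 are assumptions for the example.
mu, Lam, r = 10.0, 2.0, 7.0

def int_1_plus_s_dH(a, b):
    # closed form of the integral of (1+s) * mu ds over [a, b]
    return mu * ((b - a) + (b * b - a * a) / 2.0)

# r > 2(1 + Lambda)
cond_r = r > 2 * (1 + Lam)

# int_0^t (1+s) dH(s) >= r t for all 0 <= t <= 1 (checked on a grid)
cond_small = all(int_1_plus_s_dH(0.0, t) >= r * t
                 for t in (k / 1000 for k in range(1, 1001)))

# inf_{x>=1} int_x^{x+D} (1+s) dH(s) >= r D for 1/2 <= D <= 1;
# for Exp(mu) the integrand grows in x, so the infimum is attained at x = 1
cond_large = all(int_1_plus_s_dH(1.0, 1.0 + D) >= r * D
                 for D in (0.5 + k / 1000 for k in range(0, 501)))

print(cond_r, cond_small, cond_large)
```

In this exponential case the bounds hold with any $r\le \mu$, so the standing assumptions are satisfiable whenever $\mu>2(1+\mathsf{\Lambda})$, i.e. for a sufficiently fast server.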

Note that the process ${X}_{t}$ has no explosion on any bounded interval of time with probability one, since the arrival intensities are bounded. In what follows, $\parallel {\mu}_{1}-{\mu}_{2}{\parallel}_{TV}=2{\sup}_{A}({\mu}_{1}\left(A\right)-{\mu}_{2}\left(A\right))$ is the distance in total variation between two probability measures; here the supremum is taken over all Borel measurable sets $A\subset \mathcal{X}$.

**Theorem 1.** Let assumptions (1) – (6) be satisfied. Then there exists $C>0$ such that

$$\mathsf{E}_{n,x}\tau_0\le C(n+x+1).$$

Also, there exists a unique stationary measure $\mu$ and, moreover, there exists $C>0$ such that for any $t\ge 0$,

$$\|\mu_t^{n,x}-\mu\|_{TV}\le C\,\frac{1+n+x}{1+t},$$

where $\mu_t^{n,x}$ is the marginal distribution of the process $(X_t,\, t\ge 0)$ with the initial data $X=(n,x)$, and the constant $C$ is the same as in (7).

We highlight that the initial value of the second component here is arbitrary. This is not always the case in papers on the model $M/GI/1$, as some of them concentrate on studying the embedded process at the times of jumps down, where the second component necessarily vanishes.

Note that this theorem is not a pure existence result, because the constant $C$ may be evaluated explicitly, as can be seen from the proof.

NB: The following convention will be used: $dH\left(x\right)=0$ if $(n,x)=(0,0)$.

**Lemma 1.** Under assumption (2), for any $T>0$ and for any $m\le n$,

$$\begin{array}{c}\mathsf{P}_{n,0}\left(\text{no less than } m \text{ jobs completed on } [0,T]\right)\le F(T)^m,\\[6pt] \mathsf{P}_{n,x}\left(\text{no less than } m \text{ jobs completed on } [0,T]\right)\le F(T)^{m-1}.\end{array}$$

The probability of no less than $m$ completed jobs over time $T$ for $m\le n$ is given by the repeated integral

$$\begin{array}{l}\mathsf{P}_{n,x}\left(\text{no less than } m \text{ jobs completed over time } T\right)\\[6pt] \quad=\displaystyle\int_0^T dF(x+t_1)\int_0^{T-t_1}dF(t_2)\dots \int_0^{T-t_1-\dots-t_{m-2}}dF(t_{m-1})\int_0^{T-t_1-\dots-t_{m-1}}dF(t_m)\\[6pt] \quad\le (F(x+T)-F(x))\,F(T)^{m-1}.\end{array}$$

Hence, we obtain both inequalities in (9), as required. Note that assumption (2) implies that $\sup_{x\ge 0}(F(x+T)-F(x))<1$ for any $T>0$; otherwise the distribution $dF$ would be concentrated on a finite interval, which would contradict the assumption. □

Recall the notation $\mathsf{\Lambda}={\sup}_{n}{\lambda}_{n}$ and assumption (1).

**Lemma 2.** Under assumption (2), for any $n$,

$$\inf_{x\le 1}\mathsf{P}_{n,x}(\tau_0\le n)\ge p^n \exp(-n\mathsf{\Lambda})>0,\quad \text{with } p=\tilde{F}(1).$$

Firstly, for any $n=0,1,\dots$ and any $x$,

$$\mathsf{P}_{n,x}\left(\text{there are no arrivals on } [0,n]\right)\ge \exp(-n\mathsf{\Lambda}).$$

Secondly, with probability no less than $p=\inf_{x\le 1}(F(x+1)-F(x))$ there is at least one completed job on $[0,1]$. The value $p$ is positive according to assumption (2). After each jump down on $[0,1]$ – which occurs at a stopping time – this argument may be repeated by induction $n$ times. Note that since $(n,x)$ is the initial value of the process, the server may not become idle until $n$ jobs are completed. So, this gives the lower bound

$$\inf_{x\le 1}\mathsf{P}_{n,x}\left(\text{at least } n \text{ jobs completed on } [0,n]\right)\ge p^n.$$

By multiplying the two values $\exp(-n\mathsf{\Lambda})$ and $p^n$, due to the independence of the services and arrivals, the proof is completed. □

Let $\epsilon>0$ be chosen so that

$$\epsilon<\frac{1}{2}-\frac{1+\mathsf{\Lambda}}{r}.$$

Once $\epsilon$ is chosen, let

$$M=M_\epsilon:=\left\lceil \frac{|\ln \epsilon|}{|\ln \tilde{F}(1)|}\right\rceil$$

(see (2) for the definition of $\tilde{F}(1)$), and let us choose $x(\epsilon)\ge 2$ such that

$$\sup_{x\ge x(\epsilon)}(F(x+1)-F(x))\le \epsilon.$$

Since $F(x)\uparrow 1$ as $x\uparrow \infty$, this is possible for any $\epsilon>0$. Let us introduce an auxiliary stopping time

$$\tau=\tau_\epsilon:=\inf(t=0,1,\dots:\; n_t\le M_\epsilon\;\&\; x_t\le x(\epsilon))\quad (\inf\varnothing=\infty).$$

Denote

$$K_\epsilon:=\{(n,x):n\le M_\epsilon\;\&\; x\le x(\epsilon)\}\quad \&\quad L(n,x)=n+x+1.$$

The function $L$ will serve as a Lyapunov function outside the compact set $K_\epsilon$.
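The constants $M_\epsilon$ and $x(\epsilon)$ are explicitly computable once $F$ and $\epsilon$ are fixed. A small illustration for Exp(1) service (the choices $F(x)=1-e^{-x}$ and $\epsilon=0.01$ are assumptions for the example only):

```python
import math

# Illustrative computation of M_eps and x(eps) for F(x) = 1 - exp(-x), eps = 0.01.
eps = 0.01
F = lambda x: 1.0 - math.exp(-x)

# F~(1) = inf_{x<=1} (F(x+1) - F(x)); for Exp(1) the infimum is attained at x = 1
F_tilde_1 = min(F(x + 1) - F(x) for x in (k / 1000 for k in range(0, 1001)))

M_eps = math.ceil(abs(math.log(eps)) / abs(math.log(F_tilde_1)))

# x(eps) >= 2 such that sup_{x >= x(eps)} (F(x+1) - F(x)) <= eps; for Exp(1),
# F(x+1) - F(x) = e^{-x}(1 - e^{-1}), which is <= eps once x >= ln((1-e^{-1})/eps)
x_eps = max(2.0, math.log((1.0 - math.exp(-1.0)) / eps))

print(M_eps, round(x_eps, 3))
```

Here $\tilde F(1)=e^{-1}-e^{-2}\approx 0.2325$, which gives a small compact set $K_\epsilon$; shrinking $\epsilon$ enlarges both $M_\epsilon$ and $x(\epsilon)$ only logarithmically.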

First of all, we are going to estimate the first moment of $\tau$, namely, to prove that there exists $C>0$ such that

$$\mathsf{E}_{n,x}\tau\le CL(X),\quad X=(n,x).$$

Recall that

$$\tau_0:=\inf(t\ge 0:\; X_t=(0,0)),$$

and highlight that the definition of $\tau=\tau_\epsilon$ is quite different from that of $\tau_0$.

Let $X_0=X$. For $X_t=(n_t,x_t)\ne (0,0)$ we have

$$dL(X_t)=\lambda_{n_t}\big((n_t+2+x_t)-(n_t+1+x_t)\big)\,dt+dt+1(n_t>0)\big(n_t-(n_t+1+x_t)\big)\,dH(x_t)+dM_t,$$

where $M_t$ is a local martingale (see, e.g., [13]), and

$$dJ^1=I_1\,dt:=\lambda_{n_t}\big((n_t+2+x_t)-(n_t+1+x_t)\big)\,dt=\lambda_{n_t}\,dt,\qquad dJ^2:=dt,$$
$$dJ^3:=1(n_t>0)\big((n_t+1+x_t)-n_t\big)\,dH(x_t)=1(n_t>0)(1+x_t)\,dH(x_t).$$

It is assumed that $J^1(0)=J^2(0)=J^3(0)=0$ by definition. The martingale $M_t$ is, in fact, a “normal”, that is, non-local one, because due to the assumptions the expectations of all terms in the integral version of (16) are finite. The following bound will be established:

$$\mathsf{E}_{n,x}(J^1(1)+J^2(1)-J^3(1))\le -C,$$

with some $C>0$, if $(n,x)\notin K_\epsilon$.

Firstly, we have,

$${J}^{1}\left(1\right)+{J}^{2}\left(1\right)={\int}_{0}^{1}(1+{\lambda}_{{n}_{t}})dt\le 1+\mathsf{\Lambda}.$$

In order to evaluate $J^3$, let us introduce a sequence of stopping times. Let

$$\gamma=\gamma_{n,x}:=\inf(0\le t\le 1: x_t=0)\quad (\inf(\varnothing)=1).$$

To evaluate $J^3$, using the identity $1(t<\gamma)1(n_t>0)=1(t<\gamma)$, which holds provided that $n>0$, let us estimate,

$$\begin{array}{l}
\mathsf{E}_{n,x}J^3_{n,x}(1)=\mathsf{E}_{n,x}\displaystyle\int_0^1 1(n_t>0)(1+x_t)\,dH(x_t)\\[8pt]
=\mathsf{E}_{n,x}\displaystyle\int_0^\gamma \overbrace{1(n_t>0)}^{=1}(1+x_t)\,dH(x_t)+\mathsf{E}_{n,x}\int_\gamma^1 1(n_t>0)(1+x_t)\,dH(x_t)\\[8pt]
=\mathsf{E}_{n,x}\displaystyle\int_0^\gamma (1+x+t)\,dH(x+t)+\mathsf{E}_{n,x}1(\inf_{0\le t\le 1}n_t>0)\int_\gamma^1 (1+x_t)\,dH(x_t)\\[8pt]
\quad+\underbrace{\mathsf{E}_{n,x}1(\inf_{0\le t\le 1}n_t=0)\displaystyle\int_\gamma^1 1(n_t>0)(1+x_t)\,dH(x_t)}_{\ge 0}\\[8pt]
\ge \mathsf{E}_{n,x}\displaystyle\int_0^\gamma (1+x+t)\,dH(x+t)+\mathsf{E}_{n,x}1(\inf_{0\le t\le 1}n_t>0)\int_\gamma^1 (1+x_t)\,dH(x_t)=:F^1+F^2.
\end{array}$$

Let us estimate the term $F^1$. We have (recall that $\gamma\le 1$ by definition),

$$F^1=\mathsf{E}_{n,x}\int_0^\gamma (1+x+t)\,dH(x+t)\ge \mathsf{E}_{n,x}1(\gamma\ge 1/2)\underbrace{\int_0^\gamma (1+x+t)\,dH(x+t)}_{\ge r\gamma}\ge r\,\mathsf{E}_{n,x}1(\gamma\ge 1/2)\gamma.$$

Here the complementary part of the integral, $\mathsf{E}_{n,x}1(\gamma<1/2)\int_0^\gamma (1+x+t)\,dH(x+t)\ge 0$, was simply dropped.

Further, it will be shown that

$$F^2\ge r\,\mathsf{E}_{n,x}(1-\gamma)1(\inf_{0\le t\le 1}n_t>0).$$

For this aim, let us introduce by induction the sequence of stopping times

$$\gamma^1=\gamma,\quad \gamma^{n+1}:=\inf(\gamma^n<t\le 1:\; x_t=0),\quad n\ge 1\quad (\inf(\varnothing)=1).$$

Note that the component $x_t$ may only have finitely many jumps down on any finite interval. The times of jumps on the interval $[0,1]$ are exactly the times $\gamma^n<1$, and possibly the last jump down on this interval may or may not occur at $t=1$. In any case, clearly,

$$\lim_{n\to\infty}\gamma^n=1.$$

So, we have,

$$\int_\gamma^1 (1+x_t)\,dH(x_t)=\sum_{n=1}^\infty \int_{\gamma^n}^{\gamma^{n+1}}(1+x_t)\,dH(x_t),$$

where for each outcome this series is almost surely a finite sum. On each interval $[\gamma^n,\gamma^{n+1}]$ we may write down

$$\begin{array}{l}
\displaystyle\int_{\gamma^n}^{\gamma^{n+1}}(1+x_t)\,dH(x_t)=\int_0^{\gamma^{n+1}-\gamma^n}(1+x_{\gamma^n}+s)\,dH(x_{\gamma^n}+s)\\[8pt]
=\displaystyle\int_0^{\gamma^{n+1}-\gamma^n}1(n_{\gamma^n+s}>0)(1+x_{\gamma^n}+s)\,dH(x_{\gamma^n}+s)\\[8pt]
=\displaystyle\int_0^{\gamma^{n+1}-\gamma^n}1(n_{\gamma^n+s}>0)(1+s)\,dH(s)\ge r(\gamma^{n+1}-\gamma^n)1(\inf_{0\le t\le 1}n_t>0).
\end{array}$$

This is by virtue of assumption (5) and because at each stopping time $\gamma^n$ which is less than 1 we have $x_{\gamma^n}=0$. If $\gamma^n\ge 1$, then both sides of the latter inequality equal zero, so the inequality still holds true. Therefore, taking the sum over $n$, we obtain (21), just without the multiplier $1(\inf_{0\le t\le 1}n_t>0)$ in the left-hand side. The presence of this multiplier in the right-hand side guarantees that adding it to the left-hand side still leads to a valid inequality, which means that the bound (21) is justified.

It follows from (20) and (21) that

$$\begin{array}{l}
J^3=F^1+F^2\ge r\,\mathsf{E}_{n,x}1(\gamma\ge 1/2)\gamma+r\,\mathsf{E}_{n,x}(1-\gamma)1(\inf_{0\le t\le 1}n_t>0)\\[8pt]
=r\,\mathsf{E}_{n,x}1(\gamma\ge 1/2)\gamma+r\,\mathsf{E}_{n,x}(1-\gamma)-r\,\mathsf{E}_{n,x}(1-\gamma)1(\inf_{0\le t\le 1}n_t=0)\\[8pt]
\ge r-r\,\mathsf{E}_{n,x}1(\gamma<1/2)\gamma-r\,\mathsf{E}_{n,x}1(\inf_{0\le t\le 1}n_t=0)\ge \dfrac{r}{2}-r\,\mathsf{P}_{n,x}(\inf_{0\le t\le 1}n_t=0).
\end{array}$$

Due to Lemma 1, if $n>M$ then (see (12))

$$\mathsf{E}_{n,x}1(\inf_{0\le t\le 1}n_t=0)\le \mathsf{P}_{n,x}\left(\text{at least } n-1 \text{ jobs completed on } [0,1]\right)\le F(1)^{n-1}\le \epsilon.$$

(NB: In fact, at least $n$ jobs should be completed, the first one on $[x,x+1]$; however, we prefer to have a bound independent of $x$. In any case, this does not change the scheme of the proof.) Likewise, if $x>x(\epsilon)$, then

$$\mathsf{E}_{n,x}1(\inf_{0\le t\le 1}n_t=0)\le \mathsf{P}_{n,x}\left(\text{at least one job completed on } [0,1]\right)\le F(x+1)-F(x)\le \epsilon,$$

due to the choice of $x(\epsilon)$, see (13).

Recall that $\epsilon$ was chosen so that $r\epsilon<(r/2)-(1+\mathsf{\Lambda})$, see (11). Hence,

$$J^3\ge \frac{r}{2}-r\epsilon.$$

Denote

$$(0<)\;\Delta:=\frac{r}{2}-r\epsilon-1-\mathsf{\Lambda}.$$

Then

$$\mathsf{E}_{n,x}(J^3_{n,x}(1)-J^1(1)-J^2(1))\ge \Delta.$$

Thus, for any $(n,x)\notin K_\epsilon$ we obtain,

$$1((n,x)\notin K_\epsilon)\mathsf{E}_{n,x}L(X_1)\le 1((n,x)\notin K_\epsilon)(L(X_0)-\Delta).$$

The bound (17) follows with a constant $C$ which may be evaluated via $r,\mathsf{\Lambda},\epsilon$. Since the event $((n,x)\notin K_\epsilon)$ coincides with $(0<\tau)$, the latter bound may be rewritten as

$$\boxed{1(0<\tau)\mathsf{E}_{n,x}L(X_1)\le 1(0<\tau)(L(X_0)-\Delta).}$$

Similarly, $((n_1,x_1)\notin K_\epsilon)$ may be equivalently expressed as $(\tau>1)$. So, we get

$$1(1<\tau)\mathsf{E}_{n_1,x_1}L(X_2)\le 1(1<\tau)(L(X_1)-\Delta).$$

Hence, by taking expectations we obtain,

$$\mathsf{E}_{n,x}1(1<\tau)\mathsf{E}_{n_1,x_1}L(X_2)\le \mathsf{E}_{n,x}1(1<\tau)(L(X_1)-\Delta).$$

Similarly, by induction (in what follows the notation $n_k=n_t|_{t=k}$ is used),

$$\mathsf{E}_{n,x}1(k-1<\tau)\mathsf{E}_{n_{k-1},x_{k-1}}L(X_k)\le \mathsf{E}_{n,x}1(k-1<\tau)(L(X_{k-1})-\Delta),\quad k\ge 1.$$

Due to the elementary bound $1(k-1<\tau)\ge 1(k<\tau)$, this implies,

$$\boxed{\Delta\times \mathsf{E}_{n,x}1(k-1<\tau)\le \mathsf{E}_{n,x}1(k-1<\tau)L(X_{k-1})-\mathsf{E}_{n,x}1(k<\tau)L(X_k).}$$

Summing up and dropping the negative term in the right-hand side, we obtain

$$\Delta\,\sum_{k=1}^N \mathsf{E}_{n,x}1(k-1<\tau)\le 1((n,x)\notin K_\epsilon)L(X_0)$$

for any $N>0$. So, by the monotone convergence theorem,

$$\Delta\,1((n,x)\notin K_\epsilon)\sum_{k=1}^\infty \mathsf{E}_{n,x}1(k-1<\tau)\le 1((n,x)\notin K_\epsilon)L(X_0).$$

By virtue of the well-known relation

$$\sum_{k=1}^\infty \mathsf{E}_{n,x}1(k-1<\tau)=\mathsf{E}_{n,x}\sum_{k=1}^\infty 1(k\le \tau)=\mathsf{E}_{n,x}\sum_{k=1}^\tau 1=\mathsf{E}_{n,x}\tau,$$

for the expectation of $\tau$ the bound (23) implies the following inequality with $X_0=(n,x)$:

$$\boxed{1((n,x)\notin K_\epsilon)\mathsf{E}_{n,x}\tau\le \Delta^{-1}1((n,x)\notin K_\epsilon)L(X_0).}$$

In particular, this bound signifies that

$$\mathsf{P}_{n,x}(\tau<\infty)=1.$$

**3.** Now, once the bound for the expected value of $\tau$ is established, we are ready to explain in detail how to get a bound for $\mathsf{E}_{n,x}\tau_0$. The rest of the proof is devoted to this implication, with the last sentences related to the corollary about the invariant measure and the convergence to it.
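The well-known relation used above, $\mathsf{E}\,\tau=\sum_{k\ge 1}\mathsf{P}(\tau\ge k)$ for an integer-valued $\tau$, is easy to sanity-check numerically. The geometric law below is an illustrative choice, not a quantity from the paper:

```python
from fractions import Fraction

# Check E[tau] = sum_{k>=1} P(tau >= k) for tau ~ Geometric(p) on {1,2,...}
# (an illustrative choice), with exact rational arithmetic and a large
# truncation horizon N; the geometric tail beyond N is negligible here.
p = Fraction(1, 4)
N = 200

# P(tau >= k) = (1-p)^{k-1}
tail_sum = sum((1 - p) ** (k - 1) for k in range(1, N + 1))

# direct computation of the (truncated) mean: sum k * P(tau = k)
mean = sum(k * p * (1 - p) ** (k - 1) for k in range(1, N + 1))

# both converge to E[tau] = 1/p = 4
print(float(tail_sum), float(mean))
```

The same tail-sum identity is exactly what converts the summed Lyapunov drift inequality into the first-moment bound on $\tau$ above.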

At $\tau$, the process $X_k$ attains the set $(X: X\in K_\epsilon)$, while $X_0\notin K_\epsilon$; hence, both

$$n_\tau\le M\quad \&\quad x_\tau\le x(\epsilon).$$

By definition, the random variable $\tau$ is the first integer $k$ where simultaneously

$$n_k\le M\quad \&\quad x_k\le x(\epsilon).$$

Therefore, at $k-1$ we have either $n_{k-1}>M$, or $x_{k-1}>x(\epsilon)$, or both. If there are no completed jobs on $[k-1,k]$, then $n_t$ may only increase, or at least stay constant, on this interval, while $x_t$ certainly increases. Therefore, $\tau=k$ may only be achieved via at least one completed job; it means a jump down by one of the $n$-component and simultaneously a jump down to zero of the $x$-component. Then at $k$ we obtain $x_k\le 1$, which certainly makes it less than $x(\epsilon)$, irrespective of whether or not there were other arrivals or completed jobs on $[k-1,k]$ (recall that in addition to inequality (13) we assumed that $x(\epsilon)\ge 2$).

Now, given $n\le M$ and $x\le x(\epsilon)$, by virtue of Lemma 2, for any $T>0$ we have with any $x\le 1$,

$$\mathsf{P}_{n,x}(\text{no arrivals on } [0,T],\; n(X_T)=0)\ge p(T)>0,$$

where

$$p(T):=p^T \exp(-T\mathsf{\Lambda})=\tilde{F}(1)^T \exp(-T\mathsf{\Lambda}).$$

Here $T$ is any positive integer and

$$\tilde{F}(1)=\inf_{x\le 1}(F(x+1)-F(x)).$$

Note that, of course, for non-integer values of $T>0$ there is a similar bound, but it looks a bit more involved, and using integer values of $T$ suffices for the proof. Recall that it was assumed in (2) that $\tilde{F}(1)>0$, and it follows from the first line of (2) that $\tilde{F}(1)<1$. Inequality (25) implies that

$$\inf_{x\le 1}\mathsf{P}_{n_\tau,x_\tau}(\text{no arrivals on } [0,T],\; n(X_T)=0)\ge p(T)>0.$$

NB: Here the standard notations for homogeneous Markov processes are used, which means that $X_T$ after the stopping time $\tau$ is, actually, the value $X_{\tau+T}$.

Note that for any $T>0$ the event $(n(X_{\tau+T})=0)$ implies that

$$\tau_0\le \tau+T.$$

Hence, we conclude,

$$\mathsf{P}_{n_\tau,x_\tau}(\text{no arrivals on } [0,T],\;\tau_0\le T)\ge p(T)>0,$$

and, therefore, for any $x$,

$$\mathsf{P}_{n,x}(\text{no arrivals on } [\tau,\tau+T],\;\tau_0\le \tau+T)\ge p(T)>0.$$

**4.** Consider now the process $X$ started at time $\tau$ from the state $(n_\tau,x_\tau)$ with $x_\tau\le 1$ and $n_\tau\le M$.

Let $T:=\lceil x(\epsilon)\rceil$ and let us stop the process either at $\tau+T$, or at

$$\chi:=\inf(t\ge \tau:\; n_t\ge M+1,\;\text{or}\; x_t\ge x(\epsilon)+1),$$

whatever happens earlier. In other words, consider the stopping time

$$\chi^1:=\chi\wedge (\tau+T).$$

The event $(\chi^1=\tau+T)$ implies that the process $L(X_t)$ does not exceed the level $M+2+T$ on the interval $[\tau,\tau+T]$. On the other hand, according to the arguments of step 3 we have,

$$\mathsf{P}_{n_\tau,x_\tau}(\tau_0\le \chi^1)\ge p(T)>0.$$

Let

$$\chi^0:=0,\quad \tau^1:=\tau,$$

and further, let us define two sequences of stopping times by induction,

$$\begin{array}{l}
\chi^{k+1}:=\inf(t\ge \tau^{k+1}:\; n_t\ge M+1,\;\text{or}\; x_t\ge x(\epsilon)+1)\wedge (\tau^{k+1}+T),\\[8pt]
\tau^{k+1}:=\inf(\chi^k+i,\; i\ge 0:\; n_{\chi^k+i}>M,\;\text{or}\; x_{\chi^k+i}\ge x(\epsilon)+1),\quad k\ge 0.
\end{array}$$

Note that all integers here in expressions like $\chi^k$ and $\tau^k$ stand for upper indices, not for powers. Let us highlight that the stopping time $\tau^{k+1}$ equals $\chi^k$ plus some integer, but $\chi^k$ itself may or may not be an integer. All these stopping times are finite with probability one, and, moreover,

$$1(\tau_0>\tau^k)\mathsf{P}_{X_{\tau^k}}(\tau_0\le \chi^{k+1})\ge 1(\tau_0>\tau^k)p(T).$$

Also, almost surely

$$\liminf_{k\to\infty}\tau^k\ge \tau_0.$$

NB: If $\inf_n \lambda_n=0$, then, in general, we may not claim that $\lim_{k\to\infty}\tau^k=\infty$, but this is not necessary for our aims.

Using the strong Markov property at the times $\chi^i$ (see [9]), we obtain by induction,

$$\mathsf{P}_{X_0}(\tau_0>\chi^k)\le (1-p(T))^k,\quad k\ge 1.$$

**5.** Denote

$$d^k:=\chi^k-\tau^k,\quad \delta^k:=\tau^{k+1}-\chi^k.$$

Also by induction, it follows from (24) and from the elementary bound

$$d^k\le T,$$

which holds by definition, that there exists a constant $C$ such that

$$\mathsf{E}_{X_0}\big(\underbrace{(\chi^{k+1}-\chi^k)}_{=\delta^k+d^{k+1}}\,\big|\,X_{\chi^k}\big)\le C,\quad \forall\, k\ge 1.$$

Using the representation

$$\chi^{k+1}=\overbrace{\tau^1+d^1}^{=\chi^1}+\sum_{i=2}^{k+1}(\delta^{i-1}+d^i),$$

and due to (30), we estimate,

$$\begin{array}{rl}
\mathsf{E}_{X_0}\tau_0&=\displaystyle\sum_{k=0}^\infty \mathsf{E}_{X_0}\tau_0\, 1(\chi^k<\tau_0\le \chi^{k+1})\le \sum_{k=0}^\infty \mathsf{E}_{X_0}\chi^{k+1}1(\chi^k<\tau_0)1(\chi^{k+1}\ge \tau_0)\\[10pt]
&=\displaystyle\sum_{k=0}^\infty \mathsf{E}_{X_0}\Big(\tau^1+d^1+\sum_{i=2}^{k+1}(\delta^{i-1}+d^i)\Big)1(\chi^k<\tau_0)1(\chi^{k+1}\ge \tau_0).
\end{array}$$

Now we are going to estimate this sum by a geometric type series in combination with the bounds (31) and (32). A little issue is that we are not able to use Hölder's or the Cauchy – Buniakowskii – Schwarz inequality here, because we only possess a first moment bound for $\tau^k$, while higher moments are not available. This minor obstacle is resolved in the next step of the proof by the following arguments using conditional expectations with respect to suitable sigma-algebras.

$$\begin{array}{cc}\hfill \phantom{\rule{1.em}{0ex}}& {\mathsf{E}}_{{X}_{0}}\left(\right)open="("\; close=")">{\tau}^{1}+{d}^{1}+\sum _{i=2}^{k+1}({\delta}^{i-1}+\underset{\le T}{\underbrace{{d}^{i}}})1({\chi}^{k}{\tau}_{0})1({\chi}^{k+1}\ge {\tau}_{0})\hfill \end{array}\\ \hfill \phantom{\rule{1.em}{0ex}}& +{\mathsf{E}}_{{X}_{0}}({\delta}^{k}+\underset{\le T}{\underbrace{{d}^{k+1}}})\prod _{j=1}^{k}1({\chi}^{j}{\tau}_{0}).\hfill $$

$${\mathsf{E}}_{{X}_{0}}\prod _{j=1}^{k}1({\chi}^{j}<{\tau}_{0})\le {(1-p\left(T\right))}^{k}.$$

$$\begin{array}{cc}\hfill \phantom{\rule{1.em}{0ex}}& {\mathsf{E}}_{{X}_{0}}({\delta}^{k}+{d}^{k+1})\prod _{j=1}^{k}1({\chi}^{j}<{\tau}_{0})={\mathsf{E}}_{{X}_{0}}{\mathsf{E}}_{{X}_{0}}\left(\right)open="("\; close=")">({\delta}^{k}+{d}^{k+1})\prod _{j=1}^{k}1({\chi}^{j}{\tau}_{0})|{\mathcal{F}}_{{\chi}^{k}}\hfill \end{array}$$
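The geometric-trials mechanism behind these tail bounds can be illustrated numerically: if each regeneration attempt succeeds with probability at least $p(T)$ independently of the past (which is what the strong Markov property delivers here), then the number of attempts before hitting the empty state is dominated by a geometric random variable. A minimal Monte Carlo sketch, where the constant `p` is an arbitrary illustrative stand-in for $p(T)$:

```python
import random

def simulate_attempts(p, n_paths=200_000, seed=1):
    """Number of regeneration attempts until the first success,
    each attempt succeeding with probability p independently."""
    random.seed(seed)
    counts = []
    for _ in range(n_paths):
        k = 1
        while random.random() >= p:
            k += 1
        counts.append(k)
    return counts

p = 0.3  # illustrative stand-in for p(T)
counts = simulate_attempts(p)
mean_attempts = sum(counts) / len(counts)  # close to 1/p
# empirical tails P(K > k), to be compared with (1-p)**k
tails = {k: sum(c > k for c in counts) / len(counts) for k in range(1, 5)}
```

In this idealised setting the tail bound $(1-p)^k$ is attained with equality; in the proof it is only an upper bound, because $p(T)$ is a uniform lower bound on the success probability.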

Let us inspect the previous term. Using that ${\delta}^{k-1}$ and all random variables $1({\chi}^{j}<{\tau}_{0})$ are ${\mathcal{F}}_{{\tau}^{k}}$-measurable for any $j\le k-1$, we obtain by induction,

$$\begin{aligned}
&\mathsf{E}_{X_{0}}(\delta^{k-1}+\underbrace{d^{k}}_{\le T})\prod_{j=1}^{k}1(\chi^{j}<\tau_{0})\le T\,\mathsf{E}_{X_{0}}\prod_{j=1}^{k}1(\chi^{j}<\tau_{0})+\mathsf{E}_{X_{0}}\,\delta^{k-1}\prod_{j=1}^{k}1(\chi^{j}<\tau_{0})\\
&\le T\,(1-p(T))^{k}+\mathsf{E}_{X_{0}}\Big(\delta^{k-1}\prod_{j=1}^{k-1}1(\chi^{j}<\tau_{0})\,\underbrace{\mathsf{E}_{X_{0}}\big(1(\chi^{k}<\tau_{0})\,\big|\,\mathcal{F}_{\tau^{k}}\big)}_{\le\, 1-p(T)}\Big).
\end{aligned}$$

Also by induction we find that a similar upper bound with the multiplier ${(1-p(T))}^{k}$ holds for each term in the sum in the right hand side of (33), for $2\le i<k$ and $k\ge 3$. Indeed, using the identity $1({\chi}^{k}<{\tau}_{0})=\prod _{j=1}^{k}1({\chi}^{j}<{\tau}_{0})$, we have for $2\le i<k$,

$$\begin{aligned}
&\mathsf{E}_{X_{0}}(\delta^{i-1}+\underbrace{d^{i}}_{\le T})\Big(\prod_{j=1}^{k-1}1(\chi^{j}<\tau_{0})\Big)\,1(\chi^{k}<\tau_{0})\\
&=\mathsf{E}_{X_{0}}\mathsf{E}_{X_{0}}\Big(1(\chi^{k}<\tau_{0})\,(\delta^{i-1}+\underbrace{d^{i}}_{\le T})\prod_{j=1}^{k-1}1(\chi^{j}<\tau_{0})\,\Big|\,\mathcal{F}_{\tau^{k}}\Big)\\
&\le (1-p(T))\,\mathsf{E}_{X_{0}}(\delta^{i-1}+T)\prod_{j=1}^{k-1}1(\chi^{j}<\tau_{0}).
\end{aligned}$$

Here it was used that the random variable $({\delta}^{i-1}+{d}^{i}){\prod}_{j=1}^{k-1}1({\chi}^{j}<{\tau}_{0})$ is ${\mathcal{F}}_{{\tau}^{k}}$-measurable.

Further, by virtue of the bound (31),

$$\begin{aligned}
&(1-p(T))\,\mathsf{E}_{X_{0}}(\delta^{i-1}+T)\prod_{j=1}^{k-1}1(\chi^{j}<\tau_{0})\\
&=(1-p(T))\,\mathsf{E}_{X_{0}}\mathsf{E}_{X_{0}}\Big((\delta^{i-1}+T)\prod_{j=1}^{k-1}1(\chi^{j}<\tau_{0})\,\Big|\,\mathcal{F}_{\tau^{i}}\Big)\\
&\le (1-p(T))^{k-i+1}\,\mathsf{E}_{X_{0}}(\delta^{i-1}+T)\prod_{j=1}^{i-1}1(\chi^{j}<\tau_{0})\\
&\le (1-p(T))^{k-i+1}\,C\,(1-p(T))^{i-1}=C\,(1-p(T))^{k}.
\end{aligned}$$

We used the bound (35) with ${d}^{k}$ replaced by its upper bound $T$ (as in the calculation leading to (35)) and with $k$ replaced by $i$.
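The iterated-conditioning step used here admits a quick numerical sanity check: whenever each indicator has conditional probability at most $q$ of equalling one given the past, the expectation of the product of $m$ such indicators is at most $q^{m}$, regardless of the dependence structure. A small sketch with arbitrary illustrative values of `q` and `m`:

```python
import random

random.seed(3)
q, m, n_paths = 0.6, 5, 200_000
hits = 0
for _ in range(n_paths):
    # The conditional success probabilities may depend on the past in an
    # arbitrary way, as long as none of them exceeds q.
    success = True
    for _ in range(m):
        cond_p = random.uniform(0, q)  # some past-dependent probability <= q
        if random.random() >= cond_p:
            success = False
            break
    if success:
        hits += 1
product_mean = hits / n_paths  # empirical expectation of the product
```

The empirical mean stays below $q^{m}$, mirroring how each factor $1(\chi^{j}<\tau_{0})$ contributes a factor $1-p(T)$ in the proof.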

The first term is estimated similarly with the only change that instead of the constant C we obtain a multiplier $C(x+n+1)$ which is a function of the initial data ${X}_{0}=(n,x)$ and which makes the resulting bound non-uniform with respect to the initial data:

$$\begin{array}{cc}\hfill \phantom{\rule{1.em}{0ex}}& (1-p\left(T\right)){\mathsf{E}}_{n,x}({\tau}^{1}+{d}^{1})\prod _{j=1}^{k-1}1({\chi}^{j}<{\tau}_{0})\le C(n+x+1){(1-p\left(T\right))}^{k}.\hfill \end{array}$$

Overall, collecting the bounds (35) – (37), we get,

$$\mathsf{E}_{X_{0}}\Big(\tau^{1}+d^{1}+\sum_{i=2}^{k+1}(\delta^{i-1}+\underbrace{d^{i}}_{\le T})\Big)\,1(\chi^{k}<\tau_{0})\,1(\chi^{k+1}\ge \tau_{0})\le kCL(X_{0})\,(1-p(T))^{k}.$$

Therefore, it follows that

$$\mathsf{E}_{X_{0}}\tau_{0}\le CL(X_{0})+\sum_{k=1}^{\infty}kC\,(1-p(T))^{k}\le CL(X_{0}),$$

with some new constant $C$, as required.
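The series in the last bound converges for any $p(T)\in(0,1]$; indeed $\sum_{k\ge 1}k(1-q)^{k}=(1-q)/q^{2}$ for $q=p(T)$, so the whole tail contributes only a constant. A quick numerical check of this closed form, with an arbitrary illustrative value of `q`:

```python
def weighted_geometric_sum(q, n_terms=10_000):
    """Partial sum of sum_{k>=1} k * (1-q)**k."""
    return sum(k * (1 - q) ** k for k in range(1, n_terms + 1))

q = 0.3  # illustrative stand-in for p(T)
partial = weighted_geometric_sum(q)
closed_form = (1 - q) / q ** 2  # about 7.778 for q = 0.3
```

The smaller $p(T)$ is, the larger this constant, which is consistent with the bound deteriorating as the regeneration probability shrinks.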

Let us provide two examples for comparison with “local” conditions in terms of the intensity of service, if the latter exists.

Assume that there exists the derivative function ${F}^{\prime}\left(s\right)$ and that the hazard function $h={H}^{\prime}$ is bounded below by a constant:

$$h(s)=H^{\prime}(s)=\frac{F^{\prime}(s)}{1-F(s)}\ge \mu,\quad s\ge 0.$$

The upper bound for ${J}_{n,x}^{1}+{J}_{n,x}^{2}$ is the same as in the proof of the theorem:

$$J_{n,x}^{1}+J_{n,x}^{2}\le 1+\Lambda.$$

To estimate ${J}_{n,x}^{3}$ in the case where either the initial value $n$, or the initial second component $x$, or both are large enough, by lemma 1 we have, similarly to (19),

$$\begin{aligned}
J_{n,x}^{3}(1)&:=\mathsf{E}_{n,x}\int_{0}^{1}1(n_{t}>0)(1+x_{t})\,dH(x_{t})\\
&\ge \mathsf{E}_{n,x}\int_{0}^{1\wedge\gamma}\overbrace{1(n_{t}>0)}^{=1}(1+x_{t})\,\mu\,dt+\mathsf{E}_{n,x}\int_{1\wedge\gamma}^{1}1(n_{t}>0)(1+x_{t})\,\mu\,dt\\
&=\mathsf{E}_{n,x}\int_{0}^{1\wedge\gamma}(1+x+t)\,\mu\,dt+\mathsf{E}_{n,x}1(\inf_{0\le t\le 1}n_{t}>0)\underbrace{\int_{1\wedge\gamma}^{1}(1+x_{t})\,\mu\,dt}_{\ge\,\mu(1-1\wedge\gamma)}\\
&\quad+\underbrace{\mathsf{E}_{n,x}1(\inf_{0\le t\le 1}n_{t}=0)\int_{1\wedge\gamma}^{1}1(n_{t}>0)(1+x_{t})\,\mu\,dt}_{\ge 0}\\
&\ge \mathsf{E}_{n,x}\int_{0}^{1\wedge\gamma}\underbrace{(1+x+t)}_{\ge 1}\,\mu\,dt+\mathsf{E}_{n,x}1(\inf_{0\le t\le 1}n_{t}>0)\,\mu\,(1-1\wedge\gamma).
\end{aligned}$$

So, we get,

$$\begin{aligned}
J_{n,x}^{3}(1)&\ge \mu\,\mathsf{E}_{n,x}(1\wedge\gamma)+\mu\,\mathsf{E}_{n,x}1(\inf_{0\le t\le 1}n_{t}>0)(1-1\wedge\gamma)\\
&=\mu\,\mathsf{E}_{n,x}(1\wedge\gamma)+\mu\,\mathsf{E}_{n,x}(1-1\wedge\gamma)-\mu\,\mathsf{E}_{n,x}1(\inf_{0\le t\le 1}n_{t}=0)(1-1\wedge\gamma)\\
&\ge \mu-\mu\,\mathsf{E}_{n,x}1(\inf_{0\le t\le 1}n_{t}=0)\ge \mu-\mu\epsilon,
\end{aligned}$$

where the latter inequality is due to lemma 1 and to the choice of the values of $M$ and $x\left(\epsilon \right)$, see (12) and (13).

Therefore, the condition $\mu >1+\mathsf{\Lambda}$ suffices for the claims of the theorem (assuming all its other conditions are met). This should be compared with the assumption (4). The multiplier 2 in (4) may be regarded as a price for non-local, integral type conditions, see (5) – (6). Note, however, that assumption (38) looks clearly stronger than necessary for the bound obtained. □
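As a side illustration of this constant-hazard case (not of the theorem's proof): exponential service with rate $\mu$ satisfies (38) with equality, and if, purely for illustration, the arrival intensity is also taken constant, the model reduces to the classical M/M/1 queue, where the mean time to empty the system starting from one customer is $1/(\mu-\lambda)$. A short simulation sketch with arbitrary illustrative rates (note that the theorem's sufficient condition $\mu>1+\Lambda$ is stronger than the classical stability threshold $\mu>\lambda$):

```python
import random

def mean_time_to_empty(lam, mu, n_paths=100_000, seed=2):
    """Monte Carlo mean of the hitting time of the empty state,
    starting from one customer, for an M/M/1 queue (constant hazard mu)."""
    random.seed(seed)
    total = 0.0
    for _ in range(n_paths):
        t, n = 0.0, 1
        while n > 0:
            t += random.expovariate(lam + mu)      # time to next event
            if random.random() < mu / (lam + mu):  # departure
                n -= 1
            else:                                  # arrival
                n += 1
        total += t
    return total / n_paths

lam, mu = 1.0, 3.0  # illustrative rates; here mu > 1 + lam as well
est = mean_time_to_empty(lam, mu)
exact = 1.0 / (mu - lam)  # classical mean busy period
```

The finite simulated mean is exactly the positive recurrence property ($\mathsf{E}\,\tau_{0}<\infty$) that the theorem establishes in far greater generality.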

Assume that there exists the derivative function ${F}^{\prime}\left(s\right)$ and that the hazard function $h={H}^{\prime}$ satisfies the condition

$$(1+s)\,h(s)\ge \mu,\quad s\ge 0,$$

with a constant $\mu$ (compare to [16], where a condition similar to (39) was assumed, but affine Lyapunov functions were replaced by polynomial ones of higher degrees; respectively, polynomial moments and convergence rate bounds were established for the recurrence times; the larger $\mu$ is, the faster the convergence towards the invariant measure holds).
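Condition (39) admits service distributions whose hazard rate decays to zero, which (38) rules out. A hypothetical concrete instance, chosen here purely for illustration, is the Pareto-type distribution $F(s)=1-(1+s)^{-\mu}$: its hazard is $h(s)=\mu/(1+s)$, so $(1+s)h(s)=\mu$ exactly, while $h(s)\to 0$ as $s\to\infty$. A quick check:

```python
mu = 2.5  # illustrative tail exponent

def F(s):
    """Pareto-type service distribution F(s) = 1 - (1+s)**(-mu)."""
    return 1.0 - (1.0 + s) ** (-mu)

def density(s):
    return mu * (1.0 + s) ** (-mu - 1.0)  # F'(s)

def hazard(s):
    return density(s) / (1.0 - F(s))      # equals mu / (1 + s)

# (1+s)*h(s) equals mu at every point, so (39) holds with equality,
# while h itself is not bounded below, so (38) fails.
checks = [(1.0 + s) * hazard(s) for s in (0.0, 0.5, 3.0, 100.0)]
```

So (39) genuinely enlarges the class of admissible service distributions compared to (38).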

Here the upper bound for ${J}_{n,x}^{1}+{J}_{n,x}^{2}$ is the same as in the proof of the theorem and as in the previous example:

$$J_{n,x}^{1}+J_{n,x}^{2}\le 1+\Lambda.$$

To estimate ${J}_{n,x}^{3}$ in the case where either the initial value $n$, or the initial second component $x$, or both are large enough, we have by lemma 1, similarly to (19) and to the previous example,

$$\begin{aligned}
J_{n,x}^{3}(1)&:=\mathsf{E}_{n,x}\int_{0}^{1}1(n_{t}>0)(1+x_{t})\,dH(x_{t})\\
&\ge \mathsf{E}_{n,x}\int_{0}^{1\wedge\gamma}\overbrace{1(n_{t}>0)}^{=1}\,\mu\,dt+\mathsf{E}_{n,x}\int_{1\wedge\gamma}^{1}1(n_{t}>0)\,\mu\,dt\\
&=\mathsf{E}_{n,x}\int_{0}^{1\wedge\gamma}\mu\,dt+\mathsf{E}_{n,x}1(\inf_{0\le t\le 1}n_{t}>0)\underbrace{\int_{1\wedge\gamma}^{1}\mu\,dt}_{=\,\mu(1-1\wedge\gamma)}+\underbrace{\mathsf{E}_{n,x}1(\inf_{0\le t\le 1}n_{t}=0)\int_{1\wedge\gamma}^{1}1(n_{t}>0)\,\mu\,dt}_{\ge 0}\\
&\ge \mu\,\mathsf{E}_{n,x}(1\wedge\gamma)+\mu\,\mathsf{E}_{n,x}1(\inf_{0\le t\le 1}n_{t}>0)(1-1\wedge\gamma)\\
&=\mu\,\mathsf{E}_{n,x}(1\wedge\gamma)+\mu\,\mathsf{E}_{n,x}(1-1\wedge\gamma)-\mu\,\mathsf{E}_{n,x}1(\inf_{0\le t\le 1}n_{t}=0)(1-1\wedge\gamma)\\
&\ge \mu-\mu\,\mathsf{E}_{n,x}1(\inf_{0\le t\le 1}n_{t}=0)\ge \mu-\mu\epsilon,
\end{aligned}$$

again due to the definition of $M$ and $x\left(\epsilon \right)$, see (12) and (13). Hence, here the same condition $\mu >1+\Lambda$ as in the previous example suffices for the claims of the theorem (assuming all other conditions of the theorem are met). Condition (39) is clearly more relaxed than (38), but both assume existence of the intensity of service, which is not required in theorem 1. □

The result may serve as a sufficient condition for the “steady-state” property of the model ${M}_{n}/GI/1$ used as a background in [1,12], et al. It is plausible that the method used in this paper admits extensions to more general models. As was said in the introduction, there is some moderate hope that it may also be applied to systems of Erlang–Sevastyanov type, which could potentially allow one to find sufficient conditions for convergence rates in such systems without assuming existence of the intensity of service, thus generalising the results from [16].

This research was funded by the Foundation for the Advancement of Theoretical Physics and Mathematics “BASIS”.


The author declares no conflict of interest. The funders had no role in the design of the study, in the writing of the manuscript, and in the decision to publish the results.


- H. Abouee-Mehrizi, O. Baron, State-dependent M/G/1 queueing systems. Queueing Syst., 2016, 82, 121-148. [CrossRef]
- S. Asmussen, Applied Probability and Queues, 2nd edition, Springer, Berlin et al., 2003.
- S. Asmussen and J.L. Teugels, Convergence rates for M/G/1 queues and ruin problems with heavy tails, Journal of Applied Probability, 1996, 33(4), 1181-1190. [CrossRef]
- N. Bambos, J. Walrand, On stability of state-dependent queues and acyclic queueing networks, Adv. Appl. Probab. 1989, 21(3), 681–701. [CrossRef]
- A.A. Borovkov, O.J. Boxma, Z. Palmowski, On the Integral of the Workload Process of the Single Server Queue, Journal of Applied Probability, 2003, 40(1), 200–225.
- M. Bramson, Stability of Queueing Networks, École d’Été de Probabilités de Saint-Flour XXXVI-2006, Lecture Notes in Math., Vol. 1950, 2008.
- D. Fakinos, The Single-Server Queue with Service Depending on Queue Size and with the Preemptive-Resume Last-Come-First-Served Queue Discipline, Journal of Applied Probability, 1987, 24(3), 758–767. [CrossRef]
- R. Fortet, Les fonctions aléatoires en téléphonie automatique probabilités de perte en sélection conjuguée. Ann. Télécommun. 1956, 11, 85–88. [CrossRef]
- M.H.A. Davis, Piecewise-Deterministic Markov Processes: A General Class of Non-Diffusion Stochastic Models, Journal of the Royal Statistical Society. Series B (Methodological), 1984, 46(3), 353-388; http://www.jstor.org/stable/2345677. [CrossRef]
- E.B. Dynkin, Markov processes, V. I, Springer-Verlag, Berlin-Göttingen-Heidelberg, 1965.
- B.V. Gnedenko, I.N. Kovalenko, Introduction to queueing theory. 2nd ed., rev. and suppl. Boston, MA et al., Birkhäuser, 1991.
- Y. Kerner, The conditional distribution of the residual service time in the M^{n}/G/1 queue, Stochastic Models, 2008, 24(3), 364–375. [CrossRef]
- R.Sh. Liptser, A.N. Shiryaev, Stochastic calculus on filtered probability spaces, in: S.V. Anulova, A.Yu. Veretennikov, N.V. Krylov, R.Sh. Liptser, A.N. Shiryaev, Stochastic calculus, Itogi Nauki i Tekhniki, Modern problems of fundamental math. directions, Moscow, VINITI, 1989, 114–159 (in Russian); Engl. transl.: Probability Theory III, Stochastic Calculus, Yu. V. Prokhorov and A. N. Shiryaev Eds., Springer, 1998, 111–157.
- H. Thorisson, The queue M^{n}/G/1: finite moments of the cycle variables and uniform rates of convergence, Stoch. Proc. Appl., 1985, 19(1), 85–99.
- A.Yu. Veretennikov, On the rate of convergence to the stationary distribution in the single-server queuing system, Autom. Remote Control, 2013, 74(10), 1620–1629. [CrossRef]
- A.Yu. Veretennikov, On the rate of convergence for infinite server Erlang–Sevastyanov’s problem, Queueing Syst., 2014, 76(2), 181–203. [CrossRef]
- A.Yu. Veretennikov, An open problem about the rate of convergence in Erlang-Sevastyanov’s model. Queueing Syst., 2022, 100, 357–359. [CrossRef]
- A.Yu. Veretennikov, G.A. Zverkina, Simple Proof of Dynkin’s formula for Single-Server Systems and Polynomial Convergence Rates, Markov Processes Relat. Fields, 2014, 20, 479–504. [CrossRef]
- G.A. Zverkina, On some extended Erlang–Sevastyanov queueing system and its convergence rate, J. Math. Sci., 2021, 254, 485–503. [CrossRef]


© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).


On Positive Recurrence of M_{n}/GI/1/∞ Model

Alexander Veretennikov, 2023