**Update on 3/10/2021**: I added Example 5 in the Appendix. This generic example encompasses the Riemann zeta dynamical system. A simple version of this post, targeted to engineers, machine learning professionals, and physicists, is available here. The Hurwitz-Riemann zeta map is further discussed here. The content is at the intersection of number theory, probability theory, and dynamical systems.

I recently found a way to "solve" (in a backward way) the functional equation attached to the invariant distribution, for a large class of ergodic, one-dimensional chaotic discrete dynamical systems on $[0,1]$. The methodology is based on the transfer operator. I am trying to see if I can get the exact invariant distribution for even just one very simple case in two dimensions, on the unit square. In the two-dimensional case, the two components of the invariant joint distribution are not independent, from a probability theory point of view, and this is the cause of my difficulties.

In one dimension, things are as follows. Let $x_{n+1}=p(x_n)-\lfloor p(x_n)\rfloor$ where the brackets denote the integer part function, $p(x)$ is positive, monotonic, continuous and decreasing (thus bijective) with $p(1)=1$ and $p(0)=\infty$. For instance $p(x)=\frac{1}{x}$ corresponds to the Gauss map associated with continued fractions. More generally, let $F(x)=r(x+1)-r(1)$ be the **invariant distribution** with
$$F(0)=0, F(1)=1, F(x)=0 \mbox{ if } x<0, F(x)=1 \mbox{ if } x>1.$$ Thus $r(x)$ must be increasing on $[1,2]$ and satisfy $r(2)=1+r(1)$. Not every function can be an invariant distribution, also called an attractor distribution in probability theory. Define $R(x)=r(x+1)-r(x)$. Then we can retrieve $p(x)$ (under some conditions) using the formula

$$p(x)=R^{-1}(r(x+1)-r(1))=R^{-1}(F(x)) .$$
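As a sanity check, the retrieval formula can be applied numerically. The sketch below is my own illustration, not part of the original derivation: it assumes the Gauss-map case $r(x)=\log_2 x$, so that $F(x)=\log_2(1+x)$, and inverts $R$ by bisection. The recovered map should match $p(x)=1/x$.

```python
import math

def F(x):            # invariant distribution of the Gauss map
    return math.log(1 + x, 2)

def R(x):            # R(x) = r(x+1) - r(x) with r(x) = log2(x)
    return math.log((x + 1) / x, 2)

def R_inv(y, lo=1e-9, hi=1e9, iters=200):
    # R is strictly decreasing, so bisect for the root of R(p) = y
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if R(mid) > y:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def p(x):            # retrieved map: p = R^{-1}(F(x))
    return R_inv(F(x))

# The recovered p should match the known Gauss map p(x) = 1/x
for x in (0.2, 0.5, 0.9):
    print(x, p(x), 1 / x)
```

The bisection bracket $[10^{-9},10^9]$ is an arbitrary choice that covers the relevant range of $p$ for $x$ away from the endpoints.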

A few examples are included in Appendix 1, including the Gauss map. The derivative of the invariant distribution is the invariant density, and with the above methodology, it is straightforward to obtain, prove, and generalize a number of classic results known for continued fractions (for instance Khinchin's constant or the Gauss-Kuzmin distribution) to a large class of dynamical systems (see Appendix 1). As an illustration, in Appendix 2, I compute the exact value of the lag-1 autocorrelation between successive iterates $x_n$ for the dynamical system attached to continued fractions (the Gauss map). This could be a new result (albeit easy to obtain), as I have never seen it mentioned anywhere.

The full autocorrelation function is an alternative to the Lyapunov exponent for measuring the amount of chaos present in the system.

**Problem in two dimensions**

I am wondering if it is possible, even for the most basic of all maps, that is, the angle-doubling map, also called the Bernoulli map (not covered by my methodology), to get a simple closed form for the invariant distribution of a 2-D version of the map. I started with the following 2-D system, a map associated with 2-D continued fractions:

$$x_{n+1}=\frac{1}{y_n}-\Big\lfloor\frac{1}{y_n}\Big\rfloor,\\ y_{n+1}=\frac{1}{x_n}-\Big\lfloor\frac{1}{x_n}\Big\rfloor.$$

The orbit $(x_n,y_n)$ is dense on the unit square for almost all initial conditions $(x_0,y_0)$. Of course the marginals of the invariant joint distribution are just the invariant distributions of the 1-D case for such a basic system, yet I was unable to get the full invariant joint distribution and understand its intricacies. In short, $F_{X,Y}(x,y)\neq F_X(x)F_Y(y)$.

Note that we can attach a bi-dimensional numeration system to this 2-D dynamical system. The digits of $(x_0,y_0)$ are $(a_n,b_n)$ for $n=0,1,\dots$, defined by $a_n=\lfloor 1/y_n\rfloor$ and $b_n=\lfloor 1/x_n\rfloor$. If you know the digits, you can reconstruct $(x_0,y_0)$ as follows. Start with $N$ large enough, say $N=20$ if you want about 20 digits of accuracy. Set $(u_N,v_N)=(0,0)$, then proceed backwards as follows, from $n=N-1$ down to $n=0$:

$$u_n = \frac{1}{v_{n+1}+b_n} - \Big\lfloor \frac{1}{v_{n+1}+b_n} \Big\rfloor,\\ v_n = \frac{1}{u_{n+1}+a_n} - \Big\lfloor \frac{1}{u_{n+1}+a_n} \Big\rfloor. $$ If $N=\infty$ then $(u_0,v_0)=(x_0,y_0)$. Another bi-dimensional numeration system (an extension of the standard base-$b$ numeration system) is discussed here.
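The forward digit computation and backward reconstruction can be sketched in a few lines of Python. The seeds $x_0=\sqrt{2}-1$, $y_0=\sqrt{3}-1$ and $N=15$ are my own choices (quadratic irrationals keep the floating-point orbit well behaved):

```python
import math

def frac(t):
    return t - math.floor(t)

# Forward orbit of the coupled 2-D map, recording the digits
x0, y0 = math.sqrt(2) - 1, math.sqrt(3) - 1
N = 15
a, b = [], []
x, y = x0, y0
for _ in range(N):
    a.append(math.floor(1 / y))   # a_n = floor(1/y_n)
    b.append(math.floor(1 / x))   # b_n = floor(1/x_n)
    x, y = frac(1 / y), frac(1 / x)

# Backward reconstruction from the digits, starting at (u_N, v_N) = (0, 0)
u, v = 0.0, 0.0
for n in range(N - 1, -1, -1):
    u, v = frac(1 / (v + b[n])), frac(1 / (u + a[n]))

print(u, x0)   # u approximates x0
print(v, y0)   # v approximates y0
```

The simultaneous tuple assignment matters in both loops: $u_n$ uses $v_{n+1}$, not the freshly updated $v_n$.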

**Appendix 1: Exact solution for various 1-D dynamical systems**

First, I'll introduce the concept of **digits associated with a dynamical system**. It generalizes the concept of classical numeration systems. The digit $a_n$ of $x_0$ is defined as $a_n=\lfloor p(x_n)\rfloor$ for $n=0,1,\dots$. Even though the dynamical systems discussed here are not invertible, it is possible to compute $x_0$ if you only know its digits, thanks to the fact that $p(x)$ is invertible. To compute $x_0$ based on its digits, start with $N$ large (say $N=20$), $u_N=0$, and proceed iteratively backwards starting with $n=N-1$, back down to $n=0$, using the recursion

$$u_n=q(u_{n+1} + a_n)-\lfloor q(u_{n+1} + a_n)\rfloor.$$

Here $q(x)=p^{-1}(x)$. If you start with $N=\infty$, then $x_0=u_0$. For the Gauss map, the digits are simply the numbers that appear in the continued fraction expansion $x_0=1/(a_0+1/(a_1+1/(a_2+\dots)))$. Note that $x_0\in[0,1]$ and the digits can take on any integer value greater than or equal to $1$. For the Gauss map, numbers $x_0$ that are rational are excluded. Similar restrictions apply to other maps. The expectation of the digits (the average of all the digits of $x_0$, for all $x_0$ outside a set of Lebesgue measure zero) is finite if and only if $\int_0^\epsilon p(x)f(x)dx<\infty$ for any $\epsilon\in\,]0,1]$. Here $f(x)$ is the invariant density, that is, the derivative of the invariant distribution. Likewise, the variance of the digits is finite if and only if $\int_0^\epsilon p^2(x)f(x)dx<\infty$ for any $\epsilon\in\,]0,1]$. If the expectation and variance are finite, then it is possible to compute the autocorrelation between two successive digits. Both the expectation and variance are infinite for the digits of the Gauss map (continued fractions). Below we discuss a few dynamical systems, some with finite and some with infinite expectation or variance for the digits.
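The 1-D reconstruction can be illustrated with the Gauss map, where $p(x)=q(x)=1/x$. The sketch below (my own illustration) uses $x_0=\sqrt{2}-1$, whose continued fraction digits are all equal to $2$:

```python
import math

def frac(t):
    return t - math.floor(t)

# Gauss map: p(x) = 1/x, hence q(x) = p^{-1}(x) = 1/x as well
x0 = math.sqrt(2) - 1          # continued fraction [0; 2, 2, 2, ...]
N = 15

# Forward pass: record the digits a_n = floor(p(x_n))
digits, x = [], x0
for _ in range(N):
    digits.append(math.floor(1 / x))
    x = frac(1 / x)

# Backward pass: u_n = frac(q(u_{n+1} + a_n)), starting from u_N = 0
u = 0.0
for n in range(N - 1, -1, -1):
    u = frac(1 / (u + digits[n]))

print(digits[:5])   # all digits equal 2
print(u, x0)        # u approximates x0
```

With $N=15$ the backward pass already recovers $x_0$ to about 12 decimal digits, since each backward step contracts the error by roughly $x_0^2\approx 0.17$.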

**Example 1**

$$r(x)=-2\cdot\frac{x+1}{x}\Rightarrow F(x)=\frac{2x}{x+1}, R(x)=\frac{2}{x(x+1)},\\p(x)=\frac{-1+\sqrt{5+4/x}}{2}, q(x)=\frac{4}{(2x+1)^2-5}.$$ In this case $$P(a_n = k)=F(q(k))-F(q(k+1))=\frac{4}{k(k+1)(k+2)}.$$ Thus the average digit is $E(a_n)=\sum_{k=1}^\infty kP(a_n=k)=2$.
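These closed forms are easy to verify numerically. The sketch below checks that $q$ inverts $p$, that the digit probabilities match $4/(k(k+1)(k+2))$, and that the mean digit is $2$:

```python
import math

def p(x):
    return (-1 + math.sqrt(5 + 4 / x)) / 2

def q(x):                       # q = p^{-1}
    return 4 / ((2 * x + 1) ** 2 - 5)

def F(x):                       # invariant distribution
    return 2 * x / (x + 1)

# q inverts p, and the digit probabilities match the closed form
for k in range(1, 10):
    pk = F(q(k)) - F(q(k + 1))
    print(k, p(q(k)), pk, 4 / (k * (k + 1) * (k + 2)))

# Mean digit: sum over k of k * P(a_n = k) telescopes to 2
mean = sum(k * 4 / (k * (k + 1) * (k + 2)) for k in range(1, 100000))
print(mean)   # close to 2 (truncated tail is about 4/K)
```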

**Example 2**

$$r(x)=\lambda\log\frac{\alpha+x}{\alpha+1} \mbox{ with }\lambda=\Big(\log\frac{\alpha+2}{\alpha+1}\Big)^{-1}\Rightarrow F(x)=\lambda\log\frac{\alpha+x+1}{\alpha+1}, \\ p(x)=\frac{\alpha+1}{x}-\alpha, q(x)=\frac{\alpha+1}{\alpha+x}.$$ Here $\alpha\geq 0$. If $\alpha=0$, then this is just the Gauss map and we are dealing with continued fractions.
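One way to validate such a solution is to check the invariance condition directly: since $p$ is decreasing, the preimage of $[0,x]$ under the map consists of the intervals $[q(k+x),q(k)]$ for $k=1,2,\dots$, so $F$ must satisfy $F(x)=\sum_{k\geq 1}\big(F(q(k))-F(q(k+x))\big)$. Here is a numerical check, with $\alpha=1$ as an arbitrary choice:

```python
import math

alpha = 1.0                                  # alpha = 0 gives the Gauss map
lam = 1 / math.log((alpha + 2) / (alpha + 1))

def F(x):
    return lam * math.log((alpha + x + 1) / (alpha + 1))

def q(x):
    return (alpha + 1) / (alpha + x)

def transfer_sum(x, K=200000):
    # Preimages of [0, x] are the intervals [q(k+x), q(k)], k = 1, 2, ...
    return sum(F(q(k)) - F(q(k + x)) for k in range(1, K))

for x in (0.25, 0.5, 0.75):
    print(x, F(x), transfer_sum(x))   # the two columns should agree
```

In fact the sum telescopes analytically to $F(x)$ for every $\alpha\geq 0$; the truncation after $K$ terms leaves an error of order $1/K$.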

**Example 3**

$$r(x)=\lambda\log\frac{x(x+1)}{2} \mbox{ with }\lambda=\frac{1}{\log 3}\Rightarrow F(x)=\lambda\log\frac{(x+1)(x+2)}{2}, \\ p(x)=\frac{4}{x(x+3)}, q(x)=\frac{3}{2}\Big(-1+\sqrt{1+\frac{16}{9x}}\Big).$$
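A quick numerical check of the boundary conditions $F(0)=0$, $F(1)=1$, and of the fact that $q$ inverts $p$:

```python
import math

lam = 1 / math.log(3)

def F(x):
    return lam * math.log((x + 1) * (x + 2) / 2)

def p(x):
    return 4 / (x * (x + 3))

def q(x):
    return 1.5 * (-1 + math.sqrt(1 + 16 / (9 * x)))

print(F(0), F(1))                      # boundary conditions: 0 and 1
for x in (0.1, 0.5, 0.9):
    print(x, q(p(x)))                  # q inverts p, so q(p(x)) = x
```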

**Example 4**

$$r(x)=\lambda \alpha^{w(x)} \mbox{ with }\lambda=\frac{1}{\alpha^4-\alpha^2},w(x)=2^x\Rightarrow F(x)=\lambda(\alpha^{w(x+1)}-\alpha^2), \\ p(x)=\log_2\log_\alpha\Big[\frac{1-\sqrt{1+4(w^2-\alpha^2)}}{2}\Big].$$

Here $0<\alpha<\frac{\sqrt{2}}{2}$. Other values of $\alpha$ don't work. The simpler case $r(x)=\lambda \alpha^x$ leads nowhere.

**Example 5**

Let $\psi$ be a positive, monotonically decreasing function with $\psi(0)=\infty$ and $\int_1^\infty\psi(x)dx<\infty$. $$\mbox{If } r(x)=-\frac{1}{\psi(1)}\cdot \sum_{k=0}^\infty \psi(x+k), \mbox{ then } F(x)= \frac{1}{\psi(1)}\cdot \sum_{k=1}^\infty \Big(\psi(k)-\psi(x+k)\Big),\\ R(x)=\frac{\psi(x)}{\psi(1)}, \mbox{ and } p(x)=\psi^{-1}\Big[\sum_{k=1}^\infty \Big(\psi(k)-\psi(x+k)\Big)\Big].$$ Also, the probability that an arbitrary digit $\lfloor p(x_n)\rfloor$ is equal to $k$ is $(\psi(k)-\psi(k+1))/\psi(1)$, for $k=1,2,\dots$ An interesting case, involving the Hurwitz and Riemann zeta functions, is $\psi(x)=x^{-s}$ with $s>1$.
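For $\psi(x)=x^{-s}$ we have $\psi(1)=1$ and $F(x)=\zeta(s)-\zeta(s,x+1)$, where $\zeta(s,a)$ is the Hurwitz zeta function. The sketch below (with $s=2$, an arbitrary choice) evaluates $F$ by direct partial summation in pure Python, and verifies the boundary conditions and the digit probabilities:

```python
s = 2.0   # psi(x) = x^(-s) with s > 1; note psi(1) = 1

def F(x, K=100000):
    # F(x) = sum over k >= 1 of (psi(k) - psi(x+k)); terms decay like s*x/k^(s+1)
    return sum(k ** (-s) - (x + k) ** (-s) for k in range(1, K))

def p(x):
    # p(x) = psi^{-1}(psi(1) * F(x)); with psi(1) = 1 this is F(x)^(-1/s)
    return F(x) ** (-1 / s)

print(F(0), F(1))         # boundary conditions: 0 and 1 (F(1) telescopes)
print(p(1))               # p(1) = 1, as required by the construction

# Digit distribution: P(a_n = k) = (psi(k) - psi(k+1)) / psi(1)
probs = [k ** (-s) - (k + 1) ** (-s) for k in range(1, 1000)]
print(sum(probs))         # telescopes to 1
```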

**Appendix 2: Exact autocorrelation for the Gauss map (continued fractions)**

For the Gauss map $x_{n+1}=1/x_n-\lfloor 1/x_n\rfloor$, the invariant distribution is $F(x)=\log_2(1+x)$ with $0\leq x \leq 1$. The invariant density $f(x)$ is the derivative of the distribution in question, namely $f(x)=1/((1+x)\log 2)$. The expectation and variance are easy to compute; for instance $E(x_n)=\int_0^1 x f(x)dx$. The covariance $C(x_n,x_{n+1})=E(x_n x_{n+1})-E(x_n)E(x_{n+1})$ is a bit more delicate:

$$E(x_nx_{n+1})=\int_0^1 x\Big(\frac{1}{x}-\Big\lfloor\frac{1}{x}\Big\rfloor\Big)f(x)dx=1-\int_0^1 x\Big\lfloor\frac{1}{x}\Big\rfloor f(x)dx.$$

I tried Mathematica to evaluate the rightmost integral, to no avail. I ended up computing it manually. It is equal to

$$\frac{1}{\log 2}\sum_{k=1}^\infty \Big(\frac{1}{k+1}-k\log\frac{(k+1)^2}{k(k+2)}\Big) =\frac{\gamma}{\log 2} $$ where $\gamma$ is the Euler–Mascheroni constant. Putting everything together (I spare you the details), the lag-1 autocorrelation between $x_n$ and $x_{n+1}$, that is the limit of the empirical lag-1 autocorrelation computed on the first $n$ iterates as $n\rightarrow\infty$, is $$\rho=2\cdot \frac{(2-\gamma)\log 2 - 1}{3\log 2 - 2} \approx -0.347$$
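Both the value of the series and the final autocorrelation are easy to confirm numerically. The sketch below uses `log1p` because $\log\frac{(k+1)^2}{k(k+2)}=\log\big(1+\frac{1}{k(k+2)}\big)$, and the direct form loses precision at large $k$:

```python
import math

EULER_GAMMA = 0.5772156649015329

# Partial sum of the series; terms behave like 1/((k+1)(k+2)), so the
# truncation error after K terms is roughly 1/K
K = 10 ** 6
series = sum(1 / (k + 1) - k * math.log1p(1 / (k * (k + 2)))
             for k in range(1, K))
print(series, EULER_GAMMA)           # the sum tends to gamma

# Lag-1 autocorrelation of the Gauss map
L = math.log(2)
rho = 2 * ((2 - EULER_GAMMA) * L - 1) / (3 * L - 2)
print(rho)                           # about -0.3475
```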

This is confirmed by empirical evidence, and it is true for almost all $x_0\in [0, 1]$ (in the Lebesgue sense). Exceptions include rational numbers and quadratic irrationals, such as $x_0=\sqrt{2}-1$ or the golden ratio. By comparison, the lag-$k$ autocorrelation for the multiplicative system $x_{n+1}=bx_n-\lfloor bx_n\rfloor$ attached to the numeration system in integer base $b>1$ is $b^{-k}$. The additive system $x_{n+1}=x_n+\alpha - \lfloor x_n +\alpha\rfloor$ with $\alpha$ irrational exhibits very strong long-range autocorrelations, making it one of the least chaotic dynamical systems among the chaotic ones. Unlike the Gauss map or the angle-doubling map ($b=2$), it has no exceptions (that is, numbers $x_0$ not following the dominant invariant distribution) because its invariant distribution is unique. The exceptions for the angle-doubling map are all the numbers $x_0$ that are not normal in base $2$.