
Journal of Computational and Applied Mathematics 214 (2008) 136 – 147
www.elsevier.com/locate/cam
Parameter-based Fisher’s information of orthogonal polynomials
J.S. Dehesa^{a,c,∗}, B. Olmos^{a,c}, R.J. Yáñez^{b,c}
^a Departamento de Física Moderna, Universidad de Granada, 18071-Granada, Spain
^b Departamento de Matemática Aplicada, Universidad de Granada, 18071-Granada, Spain
^c Instituto Carlos I de Física Teórica y Computacional, Universidad de Granada, 18071-Granada, Spain
Received 21 April 2006; received in revised form 8 February 2007
Abstract
The Fisher information of the classical orthogonal polynomials with respect to a parameter is introduced, its interest justified and
its explicit expression for the Jacobi, Laguerre, Gegenbauer and Grosjean polynomials found.
© 2007 Elsevier B.V. All rights reserved.
MSC: 33C45; 42C04; 94A17; 62B10; 05E35; 94A15
Keywords: Fisher information; Classical orthogonal polynomials; Gegenbauer polynomials; Grosjean polynomials; Jacobi polynomials; Laguerre polynomials
1. Introduction
Let $\{\rho_\theta(x) \equiv \rho(x|\theta);\; x\in\Lambda\subseteq\mathbb{R}\}$ be a family of probability densities parametrized by a parameter $\theta\in\mathbb{R}$. The Fisher information of $\rho_\theta(x)$ with respect to the parameter $\theta$ is defined [5,13] as

$$I(\rho_\theta) := \int \left(\frac{\partial \ln\rho(x|\theta)}{\partial\theta}\right)^{2}\rho(x|\theta)\,dx = \int \frac{\left(\partial\rho(x|\theta)/\partial\theta\right)^{2}}{\rho(x|\theta)}\,dx = 4\int \left(\frac{\partial[\rho(x|\theta)^{1/2}]}{\partial\theta}\right)^{2} dx. \quad (1)$$
This quantity measures the information about the unknown parameter $\theta$ carried by the probability density: it quantifies the ability to estimate $\theta$, and it gives the minimum error in estimating the parameter of the probability density $\rho_\theta(x)$ [4].
The notion of Fisher information was introduced by Sir R.A. Fisher in estimation theory [13]. Nowadays it is used in numerous scientific areas, ranging from statistics and information theory [4] to signal analysis [42] and quantum physics

∗ Corresponding author. Instituto Carlos I de Física Teórica y Computacional, Universidad de Granada, E-18071 Granada, Spain. Tel.: +34 958 243215; fax: +34 958 242862.
E-mail addresses: dehesa@ugr.es (J.S. Dehesa), ryanez@ugr.es (R.J. Yáñez).
0377-0427/$ - see front matter © 2007 Elsevier B.V. All rights reserved.
doi:10.1016/j.cam.2007.02.016

[14,15,23]. This information-theoretic quantity has, beyond mere nonnegativity, a number of important properties which deserve to be recalled here.
(1) Additivity for independent events. In the case that $\rho(x,y|\theta)=\rho_1(x|\theta)\cdot\rho_2(y|\theta)$, it happens that
$$I[\rho(x,y|\theta)] = I[\rho_1(x|\theta)] + I[\rho_2(y|\theta)].$$
(2) Scaling invariance. The Fisher information is invariant under sufficient transformations $y=t(x)$, so that
$$I[\rho(y|\theta)] = I[\rho(x|\theta)].$$
This property is closely related to Fisher's maximum likelihood method, and it is also very important for the theory of statistical inference.
(3) Cramér–Rao inequality [41]. It states that the reciprocal of the Fisher information $I(\rho_\theta)$ bounds from below the mean square error of an unbiased estimator $f$ of the parameter $\theta$; i.e.,
$$\sigma^{2}(f) \geq \frac{1}{I(\rho_\theta)},$$
where $\sigma^{2}(f)$ denotes the variance of $f$. This inequality, which lies at the heart of statistical estimation theory, shows how much information the distribution provides about a parameter. Moreover, it says that the Fisher information $I(\rho_\theta)$ is a more sensitive indicator of the localization of the probability density than the Shannon entropy power.
(4) Relation to other information-theoretic properties. The Fisher information is related to the Shannon entropy of $\rho(x|\theta)$ via the elegant de Bruijn identity [4,24,41]
$$\frac{\partial}{\partial\theta}\,S(\tilde\rho_\theta) = \frac{1}{2}\,I(\tilde\rho_\theta),$$
where $\tilde\rho_\theta$ denotes the convolution probability density of any probability density $\rho(x|\theta)$ with the normal density with zero mean and variance $\theta>0$, and $S(\tilde\rho_\theta) := -\int \tilde\rho_\theta(x)\ln\tilde\rho_\theta(x)\,dx$ is the Shannon entropy of $\tilde\rho_\theta(x)$.
Moreover, the Fisher information $I(\rho_\theta)$ satisfies, under proper regularity conditions, the limiting property [14]
$$I(\rho_\theta) = \lim_{\varepsilon\to 0}\frac{2}{\varepsilon^{2}}\,D(\rho_{\theta+\varepsilon}\,\|\,\rho_\theta),$$
where the symbol $D(p\,\|\,q) := \int p(x)\ln(p(x)/q(x))\,dx$ denotes the relative entropy or Kullback–Leibler divergence of the probability densities $p(x)$ and $q(x)$. Further connections of the Fisher information with other information-theoretic properties are known; see, e.g., [15,40,41].
(5) Applications in quantum physics. The classical orthogonal polynomials appear as the radial part of the wavefunc-
tions which characterize the stationary quantum-mechanical states of numerous physical and chemical systems.
It is well known, at least for quantum physicists and a large group of applied mathematicians, that the wave-
functions are the physically admissible solutions of the nonrelativistic Schrödinger equation of motion for these
systems, which for polar spherical coordinates can often be separated in radial and angular parts. The square
of these wavefunctions is a real probability density which, when the system is electrically charged, denotes the
experimentally accessible distribution of charge of the system. Often, this density is essentially the Rakhmanov
density of orthogonal polynomials as defined by Eq. (5) of this paper. The quantum-mechanical properties of the
physical systems completely depend on the spreading of the charge all over the space, that is, on the distribution
of the Rakhmanov density all over the corresponding orthogonality support. Furthermore the charge distribution
is mostly controlled by the parameter of the involved orthogonal polynomials. The Fisher information is one of
the best estimators of this parameter. Up until now the explicit computation of this information-theoretic measure
has not been performed.
In addition, the Fisher information $I(\rho_\theta)$ plays a fundamental role in the quantum-mechanical description of physical systems [8,11,14,15,17,19,20,24,25,28–33,40]. It has been shown
(a) to be a measure of both disorder and uncertainty [14,15] as well as a measure of nonclassicality for quantum systems [19,20],

138 J.S. Dehesa et al. / Journal of Computational and Applied Mathematics 214 (2008) 136 – 147
(b) to describe, some factor apart, various macroscopic quantities such as, for example, the kinetic energy [24,40] and the Weizsäcker energy [28,29,31],
(c) to derive the Schrödinger and Klein–Gordon equations of motion[15]as well as the Euler equation of the density
functional theory[25], from the principle of minimum Fisher information,
(d) to predict the most distinctive nonlinear spectral phenomena, known as avoided crossings, encountered in atomic
and molecular systems under strong external fields[17], and
(e) to be involved in numerous uncertainty inequalities[10,20,21,23,24,28,32,33,41].
These applications are most apparent when $\theta$ is a parameter of locality, so that $\rho_\theta(x)=\rho(x+\theta)$. Then the Fisher information for locality, also called intrinsic accuracy [40], does not depend on the parameter $\theta$; so, without loss of generality, we may set the location at the origin, and the Fisher information of the density $\rho_0(x)\equiv\rho(x)$ becomes
$$I(\rho) = \int \frac{[\rho'(x)]^{2}}{\rho(x)}\,dx,$$
where $\rho'(x)$ denotes the first derivative of $\rho(x)$. The locality Fisher information, or Fisher information associated with translations of a one-dimensional observable $x$ with probability density $\rho(x)$, has been calculated in the literature [4,42] for some simple families of probability densities. In particular, it is well known that the locality Fisher information of the Gaussian density equals the reciprocal of its variance, which illustrates the spreading character of these information-theoretic measures.
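The Gaussian case just mentioned can be checked by direct quadrature. The following sketch is our illustration (not part of the paper) and assumes NumPy/SciPy; it evaluates $I(\rho)=\int[\rho'(x)]^2/\rho(x)\,dx$ numerically and recovers $1/\sigma^2$:

```python
import numpy as np
from scipy.integrate import quad

def locality_fisher(rho, drho, a, b):
    """I(rho) = int_a^b [rho'(x)]^2 / rho(x) dx  (locality Fisher information)."""
    val, _ = quad(lambda x: drho(x)**2 / rho(x), a, b)
    return val

sigma = 2.0
rho = lambda x: np.exp(-x**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))
drho = lambda x: -x / sigma**2 * rho(x)

# Integrate over [-30, 30], where essentially all the Gaussian mass lies
# (finite limits avoid 0/0 from underflow in the far tails).
# For the Gaussian the result is the reciprocal of the variance, 1/sigma^2.
print(locality_fisher(rho, drho, -30.0, 30.0))  # ≈ 0.25
```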
Recently, Dehesa and Sánchez-Ruiz [39] have exactly derived the locality Fisher information of a wider and much more involved class of probability densities, the Rakhmanov densities, defined by
$$\rho_n(x) = \frac{1}{d_n^{2}}\,p_n^{2}(x)\,\omega(x)\,\chi_{[a,b]}(x), \quad (2)$$
where $\chi_{[a,b]}(x)$ is the characteristic function of the interval $[a,b]$, and $\{p_n(x)\}$ denotes a sequence of real polynomials orthogonal with respect to the nonnegative definite weight function $\omega(x)$ on the interval $[a,b]\subseteq\mathbb{R}$; that is,
$$\int_a^b p_n(x)\,p_m(x)\,\omega(x)\,dx = d_n^{2}\,\delta_{n,m} \quad (3)$$
with $\deg p_n(x)=n$. As first pointed out by Rakhmanov [27], these densities play a fundamental role in the analytic theory of orthogonal polynomials; in particular, he has shown that they govern the asymptotic behavior of the ratio $p_{n+1}(x)/p_n(x)$ as $n\to\infty$. These fundamental and applied reasons have motivated an increasing interest in the determination of the spreading of the classical orthogonal polynomials $\{p_n(x)\}$ throughout their interval of orthogonality by means of the information-theoretic measures of their corresponding Rakhmanov densities $\rho_n(x)$ [3,6–11,39].
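As a concrete instance of definition (2), the sketch below (our illustration, not part of the paper; it assumes NumPy/SciPy) builds the Rakhmanov density of the Chebyshev polynomial $T_3$, whose weight is $(1-x^2)^{-1/2}$ with $d_n^2=\pi/2$ for $n\ge 1$, and checks that it is normalized to unity:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import eval_chebyt

def rakhmanov_chebyshev(n):
    """Rakhmanov density of T_n, Eq. (2): rho_n = T_n^2 (1-x^2)^(-1/2) / d_n^2."""
    dn2 = np.pi if n == 0 else np.pi / 2  # norm of T_n for the Chebyshev weight
    return lambda x: eval_chebyt(n, x)**2 / (np.sqrt(1 - x**2) * dn2)

rho = rakhmanov_chebyshev(3)
total, _ = quad(rho, -1, 1)  # probability densities integrate to one
print(total)  # ≈ 1.0
```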
The Shannon information entropy of these densities has been examined numerically [3]. On the theoretical side, let us point out that its asymptotics ($n\to\infty$) is well known for all classical orthogonal polynomials, but its exact value for every fixed $n$ is only known for Chebyshev polynomials [43] and some Gegenbauer polynomials [2,37]. In this respect, see [9], which reviews the knowledge up to 2001. The variance and Fisher information of the Rakhmanov densities have been found in a closed and compact form for all classical orthogonal polynomials [11,39]. For other functionals of these Rakhmanov densities see Ref. [38].
In this paper we calculate the Fisher information $I(\rho_\theta)$ for the real and continuous classical orthogonal polynomials which are parameter dependent, namely the Jacobi and Laguerre polynomials. In addition, we consider the Fisher quantities of two specific families of Jacobi polynomials, the Gegenbauer or ultraspherical polynomials and the Grosjean polynomials, not only because of their intrinsic interest but also because, in particular, the Gegenbauer polynomials control the bulky shape of the physical systems with spherically symmetric potentials. Needless to say, we do not consider the Hermite polynomials $H_n(x)$, because it is well known that they do not depend on any specific parameter.
The structure of the paper is the following. We begin in Section 2 with the definition of this notion and, as well, we
collect here some basic properties of the classical orthogonal polynomials which will be used later on. Then, the Fisher

information with respect to a parameter is fully determined for Jacobi and Laguerre polynomials in Section 3, and for
Gegenbauer and Grosjean polynomials in Section 4. Finally, conclusions and some open problems are given.
2. Some properties of the parameter-dependent classical orthogonal polynomials
Let $\{\tilde y_n(x;\theta)\}_{n\in\mathbb{N}_0}$ stand for the sequence of polynomials orthonormal with respect to the nonnegative definite weight function $\omega(x;\theta)$ on the real support $(a,b)$, so that
$$\int_a^b \tilde y_n(x;\theta)\,\tilde y_m(x;\theta)\,\omega(x;\theta)\,dx = \delta_{nm}, \quad (4)$$
with $\deg\tilde y_n = n$. Here we shall consider the celebrated classical families of Laguerre $L_n^{(\alpha)}(x)$ and Jacobi $P_n^{(\alpha,\beta)}(x)$ polynomials. The normalized-to-unity density functions $\tilde\rho_n(x;\theta)$ defined as
$$\tilde\rho_n(x;\theta) = \omega(x;\theta)\,\tilde y_n^{2}(x;\theta) \quad (5)$$
are called Rakhmanov densities [27].
Here we gather various properties of the parameter-dependent classical orthogonal polynomials in a real and continuous variable (i.e., Laguerre and Jacobi) in the form of two lemmas, which will be used later on. The weight function of these polynomials can be written as
$$\omega(x;\theta) = h(x)\,[t(x)]^{\theta} \quad (6)$$
with
$$h_L(x) = e^{-x} \quad\text{and}\quad t_L(x) = x \quad (7)$$
for the Laguerre case, $L_n^{(\theta)}(x)$, and
$$h_J(x) = (1+x)^{\beta} \quad\text{and}\quad t_J(x) = 1-x \quad (8)$$
for the Jacobi case, $P_n^{(\theta,\beta)}(x)$.
Lemma 1. The derivative of the orthonormal polynomial $\tilde y_n(x;\theta)$ with respect to the parameter $\theta$ is given by
$$\frac{\partial}{\partial\theta}\,\tilde y_n(x;\theta) = \sum_{k=0}^{n} \tilde A_k(\theta)\,\tilde y_k(x;\theta) \quad (9)$$
with
$$\tilde A_k(\theta) = \frac{d_k(\theta)}{d_n(\theta)}\,A_k(\theta) \quad\text{for } k=0,1,\ldots,n-1, \quad (10)$$
$$\tilde A_n(\theta) = A_n(\theta) - \frac{1}{d_n(\theta)}\,\frac{\partial}{\partial\theta}[d_n(\theta)], \quad (11)$$
where $d_m^{2}(\theta)$ is, according to Eq. (3), the normalization constant of the orthogonal polynomial $p_m(x)=y_m(x;\theta)$, and $A_k(\theta)$ with $k=0,1,\ldots$ are the expansion coefficients of the derivative of $y_m(x;\theta)$ in terms of the system $\{y_m(x;\theta)\}$; i.e.,
$$\frac{\partial}{\partial\theta}\,y_m(x;\theta) = \sum_{k=0}^{m} A_k(\theta)\,y_k(x;\theta). \quad (12)$$

Both quantities $d_m(\theta)$ and $A_m(\theta)$ are known in the literature for the Laguerre and Jacobi cases. Indeed, the norms for the Laguerre $L_k^{(\alpha)}(x)$ and the Jacobi $P_k^{(\alpha,\beta)}(x)$ polynomials [26] are
$$[d_k^{(L)}(\alpha)]^{2} = \frac{\Gamma(k+\alpha+1)}{k!}, \quad (13)$$
$$[d_k^{(J)}(\alpha,\beta)]^{2} = \frac{2^{\alpha+\beta+1}\,\Gamma(k+\alpha+1)\,\Gamma(k+\beta+1)}{k!\,(2k+\alpha+\beta+1)\,\Gamma(k+\alpha+\beta+1)}, \quad (14)$$
respectively.
On the other hand, the expansion coefficients in Eq. (12) are known [16,22,36] to have the form
$$A_k^{(L)} = \frac{1}{n-k} \quad\text{for } k=0,1,\ldots,n-1, \qquad A_n^{(L)} = 0 \quad (15)$$
for the Laguerre polynomials $L_n^{(\alpha)}$, and
$$A_k^{(J_\alpha)} = \frac{\alpha+\beta+1+2k}{(n-k)(\alpha+\beta+1+n+k)}\,\frac{(\beta+k+1)_{n-k}}{(\alpha+\beta+k+1)_{n-k}}, \quad k=0,1,\ldots,n-1, \quad (16)$$
$$A_n^{(J_\alpha)} = \sum_{k=0}^{n-1}\frac{1}{\alpha+\beta+1+n+k} = \psi(1+\alpha+\beta+2n)-\psi(1+\alpha+\beta+n), \quad (17)$$
for the Jacobi expansion of $(\partial/\partial\alpha)P_n^{(\alpha,\beta)}(x)$, and
$$A_k^{(J_\beta)} = (-1)^{n-k}\,\frac{\alpha+\beta+1+2k}{(n-k)(\alpha+\beta+1+n+k)}\,\frac{(\alpha+k+1)_{n-k}}{(\alpha+\beta+k+1)_{n-k}}, \quad k=0,1,\ldots,n-1, \quad (18)$$
$$A_n^{(J_\beta)} = \sum_{k=0}^{n-1}\frac{1}{\alpha+\beta+1+n+k} = \psi(1+\alpha+\beta+2n)-\psi(1+\alpha+\beta+n), \quad (19)$$
for the Jacobi expansion of $(\partial/\partial\beta)P_n^{(\alpha,\beta)}(x)$. The symbol $\psi(x)=\Gamma'(x)/\Gamma(x)$ denotes the well-known digamma function, and $(z)_m=\Gamma(z+m)/\Gamma(z)$ is the Pochhammer symbol. It is worth remarking that the superindices $J_\alpha$ and $J_\beta$ indicate the expansion coefficients for the derivatives of the Jacobi polynomial with respect to the first and second parameter, respectively.
Then, taking into account Eqs. (10)–(11), (13) and (15), Lemma 1 says that the expansion coefficients $\tilde A_k(\alpha)$ for the orthonormal Laguerre polynomials $\tilde L_n^{(\alpha)}(x)$ are
$$\tilde A_k^{(L)} = \frac{1}{n-k}\left[\frac{(k+1)_{n-k}}{(k+\alpha+1)_{n-k}}\right]^{1/2}, \quad k=0,1,\ldots,n-1, \quad (20)$$
$$\tilde A_n^{(L)} = -\frac{\psi(n+\alpha+1)}{2}. \quad (21)$$
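Expansion (9) with the coefficients (20)–(21) can be checked numerically for the Laguerre family: a central difference in $\alpha$ of the orthonormal polynomial, projected onto the basis with Gauss–Laguerre quadrature (exact here, since every integrand is a polynomial times the weight), reproduces the closed-form coefficients. This sketch is our illustration, not part of the paper, and assumes NumPy/SciPy:

```python
import numpy as np
from scipy.special import eval_genlaguerre, gammaln, poch, psi, roots_genlaguerre

def y_tilde(n, alpha, x):
    # Orthonormal Laguerre polynomial L_n^(alpha)(x)/d_n, with d_n^2 from Eq. (13).
    log_dn = 0.5 * (gammaln(n + alpha + 1) - gammaln(n + 1))
    return eval_genlaguerre(n, alpha, x) * np.exp(-log_dn)

def coeffs_numeric(n, alpha, h=1e-6):
    # A~_k = int w(x;alpha) * d/dalpha[y~_n] * y~_k dx, by Gauss-Laguerre quadrature.
    x, w = roots_genlaguerre(n + 1, alpha)   # exact for degree <= 2n+1
    dy = (y_tilde(n, alpha + h, x) - y_tilde(n, alpha - h, x)) / (2 * h)
    return [float(np.sum(w * dy * y_tilde(k, alpha, x))) for k in range(n + 1)]

def coeffs_closed(n, alpha):
    # Eqs. (20)-(21).
    A = [float(np.sqrt(poch(k + 1, n - k) / poch(k + alpha + 1, n - k))) / (n - k)
         for k in range(n)]
    A.append(-float(psi(n + alpha + 1)) / 2)
    return A
```

For instance, `coeffs_numeric(3, 0.7)` and `coeffs_closed(3, 0.7)` agree to about six decimal places.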
In a similar way, the same lemma together with Eqs. (10)–(11) and (14)–(17) has allowed us to find the expressions
$$\tilde A_k^{(J_\alpha)} = \left[\frac{(k+\beta+1)_{n-k}\,(k+1)_{n-k}}{(k+\alpha+1)_{n-k}\,(k+\alpha+\beta+1)_{n-k}}\;\frac{2n+\alpha+\beta+1}{2k+\alpha+\beta+1}\right]^{1/2}\frac{2k+\alpha+\beta+1}{(n-k)(n+k+\alpha+\beta+1)}, \quad k=0,1,\ldots,n-1, \quad (22)$$
$$\tilde A_n^{(J_\alpha)} = \frac{1}{2}\left[2\psi(2n+\alpha+\beta+1)-\psi(n+\alpha+\beta+1)-\psi(n+\alpha+1)-\ln 2+\frac{1}{2n+\alpha+\beta+1}\right], \quad (23)$$

for the expansion coefficients $\tilde A_k^{(J_\alpha)}$ of the derivative with respect to the parameter $\alpha$ of the Jacobi polynomials $P_n^{(\alpha,\beta)}(x)$. Finally, the same procedure with Eqs. (10)–(11), (14), (18) and (19) leads to the values
$$\tilde A_k^{(J_\beta)} = \left[\frac{(k+\alpha+1)_{n-k}\,(k+1)_{n-k}}{(k+\beta+1)_{n-k}\,(k+\alpha+\beta+1)_{n-k}}\;\frac{2n+\alpha+\beta+1}{2k+\alpha+\beta+1}\right]^{1/2}(-1)^{n-k}\,\frac{2k+\alpha+\beta+1}{(n-k)(n+k+\alpha+\beta+1)}, \quad k=0,1,\ldots,n-1, \quad (24)$$
$$\tilde A_n^{(J_\beta)} = \frac{1}{2}\left[2\psi(2n+\alpha+\beta+1)-\psi(n+\alpha+\beta+1)-\psi(n+\beta+1)-\ln 2+\frac{1}{2n+\alpha+\beta+1}\right], \quad (25)$$
for the expansion coefficients $\tilde A_k^{(J_\beta)}$ ($k=0,1,\ldots,n$) in Eq. (9) of the $\beta$-derivative of the Jacobi polynomials $P_n^{(\alpha,\beta)}(x)$.
Lemma 2. The parameter-dependent classical orthonormal polynomials $\tilde y_n(x;\theta)$ satisfy
(a) $\displaystyle\int_a^b \frac{\partial\omega(x;\theta)}{\partial\theta}\,[\tilde y_n(x;\theta)]^{2}\,dx = -2\tilde A_n(\theta)$,
(b) $\displaystyle\int_a^b \frac{\partial\omega(x;\theta)}{\partial\theta}\,\tilde y_n(x;\theta)\,\tilde y_k(x;\theta)\,dx = -\tilde A_k(\theta), \quad k=0,1,\ldots,n-1$,
(c) $\displaystyle\int_a^b \frac{\partial^{2}\omega(x;\theta)}{\partial\theta^{2}}\,[\tilde y_n(x;\theta)]^{2}\,dx = 2\sum_{k=0}^{n}(\tilde A_k(\theta))^{2} + 2(\tilde A_n(\theta))^{2} - 2\frac{\partial\tilde A_n(\theta)}{\partial\theta}$.

Proof. To prove integrals (a) and (b) we differentiate with respect to the parameter $\theta$ the orthonormalization condition (4) for $m=n$ and $m=k\neq n$, respectively. Then one has to use Lemma 1 and again Eq. (4), and the results follow. Integral (c) is obtained by differentiating integral (a) with respect to $\theta$ and taking into account the values of the two previous integrals (a) and (b). □
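Integral (a) can be checked numerically in the Laguerre case, where $\omega(x;\alpha)=x^{\alpha}e^{-x}$ gives $\partial\omega/\partial\alpha=\omega\ln x$, and Eq. (21) gives $-2\tilde A_n(\alpha)=\psi(n+\alpha+1)$. This is our check, not from the paper, and it assumes NumPy/SciPy:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import eval_genlaguerre, gamma, psi

n, alpha = 2, 0.5

def y_tilde(x):
    # Orthonormal Laguerre polynomial; d_n^2 = Gamma(n+alpha+1)/n! from Eq. (13).
    dn = np.sqrt(gamma(n + alpha + 1) / gamma(n + 1))
    return eval_genlaguerre(n, alpha, x) / dn

# Lemma 2(a): int (dw/dalpha) y~_n^2 dx = -2 A~_n = psi(n+alpha+1) by Eq. (21).
lhs, _ = quad(lambda x: x**alpha * np.exp(-x) * np.log(x) * y_tilde(x)**2,
              0, np.inf, limit=200)
print(lhs, float(psi(n + alpha + 1)))  # both ≈ 1.103
```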
3. Parameter-based Fisher information of Jacobi and Laguerre polynomials
The distribution of the orthonormal polynomials $\tilde y_n(x;\theta)$ on their orthonormality interval and the spreading of the associated Rakhmanov densities can be most appropriately measured by means of their information-theoretic measures, the Shannon entropy and the Fisher information [13]. The former has been examined theoretically [9,6] and numerically [3] for general orthogonal polynomials, while for the latter the Fisher information associated with translations of the variable (i.e., the locality Fisher information) has been studied both analytically [39] and numerically [11]. Here we extend this study by computing a more general concept, the parameter-based Fisher information of the polynomials $\tilde y_n(x;\theta)$. This quantity is defined as the Fisher information of the associated Rakhmanov density (5) with respect to the parameter $\theta$; that is, according to Eq. (1), by
$$I_n(\theta) = 4\int_a^b \left(\frac{\partial}{\partial\theta}\,[\tilde\rho_n(x;\theta)]^{1/2}\right)^{2} dx \quad (26)$$
with $[\tilde\rho_n(x;\theta)]^{1/2} = [\omega(x;\theta)]^{1/2}\,\tilde y_n(x;\theta)$.
Theorem 1. The parameter-based Fisher information $I_n(\theta)$ of the parameter-dependent classical orthonormal polynomials $\tilde y_n(x;\theta)$ (i.e., Jacobi and Laguerre) defined by Eq. (26) has the value
$$I_n(\theta) = 2\sum_{k=0}^{n-1}[\tilde A_k(\theta)]^{2} - 2\frac{\partial\tilde A_n(\theta)}{\partial\theta}, \quad (27)$$
where $\tilde A_k(\theta)$, $k=0,1,\ldots,n$, are the expansion coefficients of the derivative with respect to $\theta$ of $\tilde y_n(x;\theta)$ in terms of the polynomials $\{\tilde y_k(x;\theta)\}_{k=0}^{n}$, which are given by Lemma 1. See Eqs. (20)–(21) and (22)–(25) for the Laguerre and Jacobi families, respectively.

Remark. Let us underline that the Fisher quantities of orthogonal, monic orthogonal and orthonormal polynomials have the same value because of Eqs. (5) and (26), keeping in mind the probabilistic character of the Rakhmanov density and the fact that all these polynomials are orthogonal with respect to the same weight function.
Proof. To prove this theorem we begin with Eq. (26). Then, the derivative with respect to $\theta$ and Lemma 1 lead to
$$\frac{\partial}{\partial\theta}\,[\tilde\rho_n(x;\theta)]^{1/2} = [\omega(x;\theta)]^{1/2}\sum_{k=0}^{n}\tilde A_k(\theta)\,\tilde y_k(x;\theta) + \frac{\partial[\omega(x;\theta)]^{1/2}}{\partial\theta}\,\tilde y_n(x;\theta).$$
The substitution of this expression into Eq. (26) and the consideration of the orthonormalization condition (4) have led us to
$$I_n(\theta) = J_1 + J_2 + J_3,$$
where
$$J_1 = 4\int_a^b \omega(x;\theta)\left[\sum_{k=0}^{n}\tilde A_k(\theta)\,\tilde y_k(x;\theta)\right]^{2} dx = 4\sum_{k=0}^{n}[\tilde A_k(\theta)]^{2},$$
$$J_2 = 4\int_a^b \left[\frac{\partial[\omega(x;\theta)]^{1/2}}{\partial\theta}\right]^{2}[\tilde y_n(x;\theta)]^{2}\,dx,$$
and
$$J_3 = 8\sum_{k=0}^{n}\tilde A_k(\theta)\int_a^b [\omega(x;\theta)]^{1/2}\,\frac{\partial[\omega(x;\theta)]^{1/2}}{\partial\theta}\,\tilde y_n(x;\theta)\,\tilde y_k(x;\theta)\,dx.$$
Now we take into account that the weight function of the parameter-dependent families of classical orthonormal polynomials in a real and continuous variable (i.e., Laguerre and Jacobi) has the form $\omega(x;\theta)=h(x)[t(x)]^{\theta}$, so that
$$\left[\frac{\partial[\omega(x;\theta)]^{1/2}}{\partial\theta}\right]^{2} = \frac{1}{4}\,\omega(x;\theta)\,[\ln t(x)]^{2} = \frac{1}{4}\,\frac{\partial^{2}\omega(x;\theta)}{\partial\theta^{2}},$$
$$[\omega(x;\theta)]^{1/2}\,\frac{\partial[\omega(x;\theta)]^{1/2}}{\partial\theta} = \frac{1}{2}\,\omega(x;\theta)\,\ln t(x) = \frac{1}{2}\,\frac{\partial\omega(x;\theta)}{\partial\theta}.$$
The use of these two expressions in the integrals $J_2$ and $J_3$, together with Lemma 2, leads to Eq. (27). □
Corollary 1. The Fisher information with respect to the parameter $\alpha$, $I_n^{(L)}(\alpha)$, of the Laguerre polynomial $\tilde L_n^{(\alpha)}(x)$ is
$$I_n^{(L)}(\alpha) = 2\sum_{k=0}^{n-1}[\tilde A_k^{(L)}]^{2} - 2\frac{\partial\tilde A_n^{(L)}}{\partial\alpha} \quad (28)$$
$$= \psi^{(1)}(n+\alpha+1) + \frac{2n}{n+\alpha}\;{}_{4}F_{3}\!\left(\left.\begin{matrix}1,\,1,\,1,\,1-n\\ 2,\,2,\,1-\alpha-n\end{matrix}\;\right|\,1\right),$$
where $\psi^{(1)}(x)=\mathrm{d}\psi(x)/\mathrm{d}x$ is the trigamma function.

Corollary 2. The Fisher information with respect to the parameter $\alpha$, $I_n^{(J_\alpha)}(\alpha,\beta)$, of the Jacobi polynomial $\tilde P_n^{(\alpha,\beta)}(x)$ has the value
$$I_n^{(J_\alpha)}(\alpha,\beta) = 2\sum_{k=0}^{n-1}[\tilde A_k^{(J_\alpha)}]^{2} - 2\frac{\partial\tilde A_n^{(J_\alpha)}}{\partial\alpha}$$
$$= 2\,\frac{\Gamma(n+\beta+1)\,n!\,(2n+\alpha+\beta+1)}{\Gamma(n+\alpha+1)\,\Gamma(n+\alpha+\beta+1)}\sum_{k=0}^{n-1}\frac{\Gamma(k+\alpha+1)\,\Gamma(k+\alpha+\beta+1)\,(2k+\alpha+\beta+1)}{\Gamma(k+\beta+1)\,k!\,(n-k)^{2}\,(n+k+\alpha+\beta+1)^{2}}$$
$$\quad\; -2\psi^{(1)}(2n+\alpha+\beta+1) + \psi^{(1)}(n+\alpha+\beta+1) + \psi^{(1)}(n+\alpha+1) + \frac{1}{(2n+\alpha+\beta+1)^{2}}, \quad (29)$$
and the Fisher information with respect to the parameter $\beta$, $I_n^{(J_\beta)}(\alpha,\beta)$, of the Jacobi polynomial $\tilde P_n^{(\alpha,\beta)}(x)$ has the value
$$I_n^{(J_\beta)}(\alpha,\beta) = 2\sum_{k=0}^{n-1}[\tilde A_k^{(J_\beta)}]^{2} - 2\frac{\partial\tilde A_n^{(J_\beta)}}{\partial\beta}$$
$$= 2\,\frac{\Gamma(n+\alpha+1)\,n!\,(2n+\alpha+\beta+1)}{\Gamma(n+\beta+1)\,\Gamma(n+\alpha+\beta+1)}\sum_{k=0}^{n-1}\frac{\Gamma(k+\beta+1)\,\Gamma(k+\alpha+\beta+1)\,(2k+\alpha+\beta+1)}{\Gamma(k+\alpha+1)\,k!\,(n-k)^{2}\,(n+k+\alpha+\beta+1)^{2}}$$
$$\quad\; -2\psi^{(1)}(2n+\alpha+\beta+1) + \psi^{(1)}(n+\alpha+\beta+1) + \psi^{(1)}(n+\beta+1) + \frac{1}{(2n+\alpha+\beta+1)^{2}}. \quad (30)$$
Both corollaries follow from Theorem 1 in a straightforward manner by taking into account expressions (20)–(25) for the expansion coefficients $\tilde A_k$ of the corresponding families. Let us underline that $J_\alpha$ and $J_\beta$ indicate the Fisher information with respect to the first and second parameter, respectively, of the Jacobi polynomial.
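As a numerical sanity check of the Laguerre result (ours, not part of the paper; it assumes NumPy/SciPy), the definition (26) can be evaluated directly by quadrature with a central difference in $\alpha$ and compared with the closed form $2\sum_{k<n}[\tilde A_k^{(L)}]^2 + \psi^{(1)}(n+\alpha+1)$, i.e., Eq. (28) written with the coefficients (20)–(21):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import eval_genlaguerre, gamma, poch, polygamma

def sqrt_density(n, alpha, x):
    # [rho~_n(x;alpha)]^(1/2) = [x^alpha e^(-x)]^(1/2) * L_n^(alpha)(x)/d_n.
    dn = np.sqrt(gamma(n + alpha + 1) / gamma(n + 1))
    return np.sqrt(x**alpha * np.exp(-x)) * eval_genlaguerre(n, alpha, x) / dn

def laguerre_fisher_numeric(n, alpha, h=1e-5):
    # Definition (26), with a central difference in alpha.
    f = lambda x: 4 * ((sqrt_density(n, alpha + h, x)
                        - sqrt_density(n, alpha - h, x)) / (2 * h))**2
    val, _ = quad(f, 0, np.inf, limit=200)
    return val

def laguerre_fisher_closed(n, alpha):
    # 2 * sum_{k<n} [A~_k^(L)]^2 + trigamma(n+alpha+1), from Eqs. (20)-(21), (27).
    s = sum(poch(k + 1, n - k) / ((n - k)**2 * poch(k + alpha + 1, n - k))
            for k in range(n))
    return 2 * s + float(polygamma(1, n + alpha + 1))
```

For example, `laguerre_fisher_numeric(2, 1.5)` and `laguerre_fisher_closed(2, 1.5)` agree to roughly five decimal places.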
4. Parameter-based Fisher information of the Gegenbauer and Grosjean polynomials

In this section we describe the Fisher information of two important subfamilies of the Jacobi polynomials $P_n^{(\alpha,\beta)}(x)$: the ultraspherical or Gegenbauer polynomials [1,9,26], which have $\alpha=\beta$, and the Grosjean polynomials of the first and second kind [12,18,34,35], which have $\alpha+\beta=\mp 1$, respectively. Let us remark that the parameter-based Fisher information for these subfamilies cannot be obtained from the expressions of the similar quantity for the Jacobi polynomials (given by Corollary 2) by a mere substitution of the parameters, because it depends on the derivative with respect to the parameter(s) and now $\alpha$ and $\beta$ are correlated.
The Gegenbauer polynomials $C_n^{(\lambda)}(x)$ are Jacobi-like polynomials satisfying the orthogonality condition (3) with the weight function $\omega_C(x;\lambda)=(1-x^{2})^{\lambda-1/2}$, $\lambda>-\frac{1}{2}$, and the normalization constant
$$[d_k^{(C)}(\lambda)]^{2} = \frac{2^{1-2\lambda}\,\pi\,\Gamma(k+2\lambda)}{k!\,(k+\lambda)\,\Gamma^{2}(\lambda)},$$
so that they can be expressed as
$$C_n^{(\lambda)}(x) = \frac{(2\lambda)_n}{(\lambda+\frac{1}{2})_n}\,P_n^{(\lambda-1/2,\,\lambda-1/2)}(x).$$
It is known [22] that expansion (12) for the derivative of $C_n^{(\lambda)}(x)$ with respect to the parameter $\lambda$ has the coefficients
$$A_k^{(C)}(\lambda) = \frac{2\,(1+(-1)^{n-k})\,(k+\lambda)}{(k+n+2\lambda)(n-k)} \quad\text{for } k=0,1,\ldots,n-1,$$
$$A_n^{(C)}(\lambda) = \sum_{k=0}^{n-1}\left[\frac{2(k+1)}{(2k+2\lambda+1)(k+2\lambda)} + \frac{2}{k+n+2\lambda}\right] = \psi(n+\lambda)-\psi(\lambda).$$
Then, according to Eqs. (10)–(11) of Lemma 1, expansion (9) for the derivative of the orthonormal Gegenbauer polynomials has the following coefficients:
$$\tilde A_k^{(C)}(\lambda) = \left[\frac{\Gamma(k+2\lambda)\,n!\,(n+\lambda)}{\Gamma(n+2\lambda)\,k!\,(k+\lambda)}\right]^{1/2}\frac{2\,(1+(-1)^{n-k})\,(k+\lambda)}{(k+n+2\lambda)(n-k)} \quad\text{for } k=0,1,\ldots,n-1,$$
$$\tilde A_n^{(C)}(\lambda) = \psi(n+\lambda)-\psi(n+2\lambda)+\ln 2+\frac{1}{2(n+\lambda)}.$$
Theorem 1 provides, according to Eq. (27), the following value for the Fisher information of the Gegenbauer polynomials $C_n^{(\lambda)}(x)$ with respect to the parameter $\lambda$:
$$I_n^{(C)}(\lambda) = \frac{16\,n!\,(n+\lambda)}{\Gamma(n+2\lambda)}\sum_{k=0}^{n-1}\frac{(1+(-1)^{n-k})\,\Gamma(k+2\lambda)\,(k+\lambda)}{k!\,(k+n+2\lambda)^{2}\,(n-k)^{2}} - 2\psi^{(1)}(n+\lambda) + 4\psi^{(1)}(n+2\lambda) + \frac{1}{(n+\lambda)^{2}}.$$
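The Gegenbauer expression can also be checked against a direct numerical evaluation of definition (26). The sketch below is our illustration (not part of the paper) and assumes NumPy/SciPy and the normalization constant quoted above:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import eval_gegenbauer, gamma, polygamma

def sqrt_density(n, lam, x):
    # [omega_C(x;lam)]^(1/2) times the orthonormal Gegenbauer polynomial.
    dn2 = np.pi * 2**(1 - 2 * lam) * gamma(n + 2 * lam) / (
        gamma(n + 1) * (n + lam) * gamma(lam)**2)
    return (1 - x**2)**((lam - 0.5) / 2) * eval_gegenbauer(n, lam, x) / np.sqrt(dn2)

def gegenbauer_fisher_numeric(n, lam, h=1e-5):
    # Definition (26), with a central difference in lambda.
    f = lambda x: 4 * ((sqrt_density(n, lam + h, x)
                        - sqrt_density(n, lam - h, x)) / (2 * h))**2
    val, _ = quad(f, -1, 1, limit=200)
    return val

def gegenbauer_fisher_closed(n, lam):
    # Closed-form Fisher information of C_n^(lambda) quoted above.
    s = sum((1 + (-1)**(n - k)) * gamma(k + 2 * lam) * (k + lam)
            / (gamma(k + 1) * (k + n + 2 * lam)**2 * (n - k)**2)
            for k in range(n))
    return (16 * gamma(n + 1) * (n + lam) / gamma(n + 2 * lam) * s
            - 2 * float(polygamma(1, n + lam))
            + 4 * float(polygamma(1, n + 2 * lam)) + 1 / (n + lam)**2)
```

For example, `gegenbauer_fisher_numeric(2, 1.2)` and `gegenbauer_fisher_closed(2, 1.2)` agree to about five decimal places.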
Let us now do the same job for the Grosjean polynomials of the first and second kind, which are the monic Jacobi polynomials $\hat P_n^{(\alpha,\beta)}(x)$ with $\alpha+\beta=\mp 1$, respectively. So we have [18,34]
$$G_n^{(\alpha)}(x) = c_n\,P_n^{(\alpha,-1-\alpha)}(x), \quad -1<\alpha<0,$$
and
$$g_n^{(\alpha)}(x) = e_n\,P_n^{(\alpha,1-\alpha)}(x), \quad -1<\alpha<2,$$
for the Grosjean polynomials of first and second kind, respectively, with the values
$$c_n = 2^{n}\binom{2n-1}{n}^{-1}, \qquad e_n = 2^{n}\binom{2n+1}{n}^{-1}.$$
The Grosjean polynomials of the first kind $G_n^{(\alpha)}(x)$ satisfy the orthogonality condition (3) with the weight function
$$\omega_G(x;\alpha) = \left(\frac{1-x}{1+x}\right)^{\alpha}\frac{1}{1+x},$$
and the normalization constant
$$[d_n^{(G)}(\alpha)]^{2} = 2^{2n-1}\,\frac{\Gamma^{2}(n)}{\Gamma^{2}(2n)}\,\Gamma(n+\alpha+1)\,\Gamma(n-\alpha).$$
These polynomials, together with the Chebyshev polynomials of the first, second, third and fourth kind, are the only Jacobi polynomials for which the associated polynomials are again Jacobi polynomials [18].

Now, expansion (12) for the derivative of the polynomials $G_n^{(\alpha)}(x)$ with respect to the parameter $\alpha$ can be obtained as
$$\frac{\partial G_n^{(\alpha)}(x)}{\partial\alpha} = \frac{\partial\hat P_n^{(\alpha,-1-\alpha)}(x)}{\partial\alpha} = \left.\frac{\partial\hat P_n^{(\alpha,\beta)}(x)}{\partial\alpha}\right|_{\beta=-1-\alpha} - \left.\frac{\partial\hat P_n^{(\alpha,\beta)}(x)}{\partial\beta}\right|_{\beta=-1-\alpha} = \sum_{k=0}^{n-1} A_k^{(G)}(\alpha)\,G_k^{(\alpha)}(x)$$
with
$$A_k^{(G)}(\alpha) = 2^{n-k+1}\,\frac{k}{n^{2}-k^{2}}\,\frac{\Gamma(2k)\,\Gamma(n+1)}{\Gamma(2n)\,\Gamma(k+1)}\,\left[(k-\alpha)_{n-k}-(-1)^{n-k}(k+\alpha+1)_{n-k}\right]$$
for $k=0,1,\ldots,n-1$ and $A_n^{(G)}(\alpha)=0$. Then, Lemma 1 provides expansion (9) for the derivative of the orthonormal Grosjean polynomials with the coefficients
$$\tilde A_k^{(G)}(\alpha) = \frac{2n}{n^{2}-k^{2}}\,\frac{(k-\alpha)_{n-k}-(-1)^{n-k}(k+\alpha+1)_{n-k}}{\left[(k+\alpha+1)_{n-k}\,(k-\alpha)_{n-k}\right]^{1/2}} \quad\text{for } k=0,1,\ldots,n-1,$$
$$\tilde A_n^{(G)}(\alpha) = \frac{1}{2}\,[\psi(n-\alpha)-\psi(n+\alpha+1)].$$
Finally, Eq. (27) of Theorem 1 allows us to find the following value for the Fisher information of the Grosjean polynomials of the first kind:
$$I_n^{(G)}(\alpha) = 8n^{2}\sum_{k=0}^{n-1}\frac{1}{(n^{2}-k^{2})^{2}}\,\frac{\left[(k-\alpha)_{n-k}-(-1)^{n-k}(k+\alpha+1)_{n-k}\right]^{2}}{(k+\alpha+1)_{n-k}\,(k-\alpha)_{n-k}} + \psi^{(1)}(n-\alpha) + \psi^{(1)}(n+\alpha+1).$$
On the other hand, the Grosjean polynomials of the second kind $g_n^{(\alpha)}(x)$ satisfy the orthogonality property (3) with the weight function
$$\omega_g(x;\alpha) = \left(\frac{1-x}{1+x}\right)^{\alpha}(1+x),$$
and the normalization constant
$$[d_n^{(g)}(\alpha)]^{2} = 2^{2n+1}\,\frac{\Gamma^{2}(n+1)}{\Gamma^{2}(2n+2)}\,\Gamma(n+\alpha+1)\,\Gamma(n-\alpha+2).$$
Moreover, the derivative of these polynomials with respect to the parameter $\alpha$ can be expanded in the form (12) as
$$\frac{\partial g_n^{(\alpha)}(x)}{\partial\alpha} = \frac{\partial\hat P_n^{(\alpha,1-\alpha)}(x)}{\partial\alpha} = \left.\frac{\partial\hat P_n^{(\alpha,\beta)}(x)}{\partial\alpha}\right|_{\beta=1-\alpha} - \left.\frac{\partial\hat P_n^{(\alpha,\beta)}(x)}{\partial\beta}\right|_{\beta=1-\alpha} = \sum_{k=0}^{n-1} A_k^{(g)}(\alpha)\,g_k^{(\alpha)}(x)$$
with
$$A_k^{(g)}(\alpha) = 2^{n-k+1}\,\frac{k+1}{(n-k)(n+k+2)}\,\frac{\Gamma(2k+2)\,\Gamma(n+1)}{\Gamma(2n+2)\,\Gamma(k+1)}\,\left[(k+2-\alpha)_{n-k}-(-1)^{n-k}(k+\alpha+1)_{n-k}\right]$$
for $k=0,1,\ldots,n-1$ and $A_n^{(g)}(\alpha)=0$. Then, Lemma 1 provides the analogous expansion (9) for the orthonormal polynomials with the coefficients
$$\tilde A_k^{(g)}(\alpha) = \frac{2(k+1)}{(n-k)(n+k+2)}\,\frac{(k+2-\alpha)_{n-k}-(-1)^{n-k}(k+\alpha+1)_{n-k}}{\left[(k+\alpha+1)_{n-k}\,(k+2-\alpha)_{n-k}\right]^{1/2}} \quad\text{for } k=0,1,\ldots,n-1,$$
$$\tilde A_n^{(g)}(\alpha) = \frac{1}{2}\,[\psi(n+2-\alpha)-\psi(n+\alpha+1)].$$
Finally, we obtain by means of Eq. (27) of Theorem 1 the Fisher information of the Grosjean polynomials of the second kind, which turns out to have the value
$$I_n^{(g)}(\alpha) = 8\sum_{k=0}^{n-1}\frac{(k+1)^{2}}{(n-k)^{2}(n+k+2)^{2}}\,\frac{\left[(k+2-\alpha)_{n-k}-(-1)^{n-k}(k+\alpha+1)_{n-k}\right]^{2}}{(k+\alpha+1)_{n-k}\,(k+2-\alpha)_{n-k}} + \psi^{(1)}(n+2-\alpha) + \psi^{(1)}(n+\alpha+1).$$
5. Conclusions and open problems
In summary, we have calculated the parameter-based Fisher information for the classical orthogonal polynomials of a continuous and real variable with a parameter dependence, namely the Jacobi and Laguerre polynomials. Then we have evaluated the corresponding Fisher quantity for the two most relevant parameter-dependent subfamilies of the Jacobi polynomials $P_n^{(\alpha,\beta)}(x)$: the Gegenbauer ($\alpha=\beta$) and the Grosjean ($\alpha+\beta=\pm 1$) polynomials.
This paper, together with Ref. [39], opens the way for the development of the Fisher estimation theory of the Rakhmanov density for continuous and discrete orthogonal polynomials in and beyond the Askey scheme. This fundamental task in approximation theory includes the determination of the spreading of the orthogonal polynomials throughout their orthogonality domain by means of the Fisher information with a locality property. All these mathematical questions have a straightforward application to quantum systems because their wavefunctions are often controlled
by orthogonal polynomials, so that the probability densities which describe the quantum-mechanical states of these
physical systems are just the Rakhmanov densities of the corresponding orthogonal polynomials. In particular, they
correspond to the ground and excited states of the physical systems with an exactly solvable spherically symmetric
potential[26], including the most common prototypes (harmonic oscillator, hydrogen atom,...), in both position and
momentum spaces.
Among the open problems let us first mention the computation of the parameter-based Fisher information of the
generalized Hermite polynomials, the Bessel polynomials and the Pollaczek polynomials. A much more ambitious
problem is the evaluation of the Fisher quantity for the general Wilson orthogonal polynomials.
On the other hand, nothing is known for discrete orthogonal polynomials. In this case, however, the very notion of
the parameter-based Fisher information is a subtle question.
Acknowledgments
This work has been partially supported by the MEC Project No. FIS2005-00973, by the European Research Network on Constructive Approximation (NeCCA) INTAS-03-51-6637 and by the J.A. Project of Excellence with ref. FQM-481.
We belong to the P.A.I. Group FQM-207 of the Junta de Andalucía, Spain.
References
[1] W. van Assche, R.J. Yáñez, R. González-Ferez, J.S. Dehesa, Functionals of Gegenbauer polynomials and D-dimensional hydrogenic momentum expectation values, J. Math. Phys. 41 (2000) 6600–6613.
[2]V.S. Buyarov, On information entropy of Gegenbauer polynomials, Vestnik. Moskov. Univ. Ser. I Mat. Mekh. 6 (1997) 8–11 (in Russian).
[3]V.S. Buyarov, J.S. Dehesa, A. Martínez-Finkelshtein, J. Sánchez-Lara, Computation of the entropy of polynomials orthogonal on an interval,
SIAM J. Sci. Comput. 26 (2005) 488–509.
[4]T.M. Cover, J.A. Thomas, Elements of Information Theory, Wiley, NY, 1991.

[5]H. Cramer, Mathematical Methods of Statistics, Princeton University Press, Princeton, NJ, 1946.
[6] J.S. Dehesa, W. van Assche, R.J. Yáñez, Information entropy of classical orthogonal polynomials and their application to the harmonic oscillator and Coulomb potentials, Meth. Appl. Anal. 4 (1997) 91–110.
[7] J.S. Dehesa, S. López-Rosa, B. Olmos, R.J. Yáñez, Information measures of hydrogenic systems, Laguerre polynomials and spherical harmonics, J. Comput. Appl. Math. 179 (2005) 185–194.
[8] J.S. Dehesa, S. López-Rosa, B. Olmos, R.J. Yáñez, The Fisher information of D-dimensional hydrogenic systems in position and momentum spaces, J. Math. Phys. 47 (5) (2006) 052104.
[9]J.S. Dehesa, A. Martínez-Finkelshtein, J. Sánchez-Ruiz, Quantum information entropies and orthogonal polynomials, J. Comput. Appl. Math.
133 (2001) 23–46.
[10]J.S. Dehesa, A. Martínez-Finkelshtein, V.N. Sorokin, Information-theoretic measures for Morse and Pöschl–Teller potentials, Molecular Phys.
104 (4) (2006) 613–622.
[11] J.S. Dehesa, P. Sánchez-Moreno, R.J. Yáñez, Cramer–Rao information plane of orthogonal hypergeometric polynomials, J. Comput. Appl. Math. 186 (2006) 523–541.
[12] H. Dette, First return probabilities of birth and death chains and associated orthogonal polynomials, Proc. Amer. Math. Soc. 129 (2000) 1805–1815.
[13]R.A. Fisher, Theory of statistical estimation, Proc. Cambridge Philos. Soc. 22 (1925) 700–725 (Reprinted in Collected Papers of R.A. Fisher,
edited by J.H. Bennett, University of Adelaide Press, South Australia, 1972, pp. 15–40).
[14]B.R. Frieden, Fisher information and uncertainty complementarity, Phys. Lett. A 169 (1992) 123–130.
[15]B.R. Frieden, Science from Fisher Information, Cambridge University Press, Boston, 2004.
[16]J. Froehlich, Parameter derivatives of the Jacobi polynomials and the Gaussian hypergeometric function, Integral Transform. Spec. Funct. 2 (4)
(1994) 253–266.
[17]R. González-Ferez, J.S. Dehesa, Characterization of atomic avoided crossings by means of Fisher information, European Phys. J. D 32 (2005)
39–43.
[18]C.C. Grosjean, The weight functions, generating functions and miscellaneous properties of the sequences of orthogonal polynomials of second
kind associated with the Jacobi and Gegenbauer polynomials, J. Comput. Appl. Math. 16 (1986) 259–307.
[19]M.J.W. Hall, Quantum properties of classical Fisher information, Phys. Rev. A 62 (2000) 012107.
[20]M.J.W. Hall, Exact uncertainty relations, Phys. Rev. A 64 (2001) 052103.
[21]O. Johnson, A. Barron, Fisher information inequalities and the central limit theorem, Probab. Theory Related Fields 129 (2004) 391–409.
[22]W. Koepf, D. Schmersau, Representations of orthogonal polynomials, J. Comput. Appl. Math. 90 (1998) 57–94.
[23]S. Luo, A variation of the Heisenberg uncertainty relation involving an average, J. Phys. A: Math. Gen. 34 (2001) 3289–3291.
[24]S. Luo, Fisher information, kinetic energy and uncertainty relation inequalities, J. Phys. A: Math. Gen. 35 (2002) 5181–5187.
[25]A. Nagy, Fisher information in density functional theory, J. Chem. Phys. 119 (2003) 9401–9405.
[26]A. Nikiforov, V. Uvarov, Special Functions in Mathematical Physics, Birkhauser, Basel, 1988.
[27]E.A. Rakhmanov, On the asymptotics of the ratio of orthogonal polynomials, Math. USSR Sb. 32 (1977) 199–213.
[28]E. Romera, J.C. Angulo, J.S. Dehesa, Fisher entropy and uncertainty-like relationships in many-particle systems, Phys. Rev. A 59 (1999)
4064–4067.
[29]E. Romera, J.S. Dehesa, Weiszäcker energy of many-electron systems, Phys. Rev. A 50 (1994) 256–266.
[30]E. Romera, J.S. Dehesa, The Fisher–Shannon information plane, an electron correlation tool, J. Chem. Phys. 120 (2004) 8096.
[31] E. Romera, J.S. Dehesa, R.J. Yáñez, The Weizsäcker functional: some rigorous results, Internat. J. Quantum Chemistry 56 (1995) 627–632.
[32]E. Romera, P. Sánchez-Moreno, J.S. Dehesa, The Fisher information of single-particle systems with a central potential, Chem. Phys. Lett. 414
(2005) 468–472.
[33] E. Romera, P. Sánchez-Moreno, J.S. Dehesa, Uncertainty relation for Fisher information of D-dimensional single-particle systems with central potentials, J. Math. Phys. 47 (2006) 103504.
[34]A. Ronveaux, W. van Assche, Upward extension of the Jacobi matrix for orthogonal polynomials, J. Approx. Theory 86 (1996) 335–357.
[35] A. Ronveaux, J.S. Dehesa, A. Zarzo, R.J. Yáñez, A note on the zeroes of Grosjean polynomials, Newsletter of the SIAM Activity Group on Orthogonal Polynomials and Special Functions, vol. 6 (2), 1996, p. 15.
[36]A. Ronveaux, A. Zarzo, I. Area, E. Godoy, Classical orthogonal polynomials: dependence of parameters, J. Comput. Appl. Math. 121 (2000)
95–112.
[37]J. Sánchez-Ruiz, Information entropy of Gegenbauer polynomials and Gaussian quadrature, J. Phys. A: Math. Gen. 36 (2003) 4857–4865.
[38]J. Sánchez Ruiz, J.S. Dehesa, Entropic integrals of orthogonal hypergeometric polynomials with general supports, J. Comput. Appl. Math. 118
(2000) 311–322.
[39]J. Sánchez-Ruiz, J.S. Dehesa, Fisher information of orthogonal hypergeometric polynomials, J. Comput. Appl. Math. 182 (2005) 150–160.
[40]S.B. Sears, R.G. Parr, U. Dinur, On the quantum-mechanical kinetic energy as a measure of the information in a distribution, Israel J. Chem.
19 (1980) 165–173.
[41]A. Stam, Some inequalities satisfied by the quantities of information of Fisher and Shannon, Inform. and Control 2 (1959) 101.
[42]C. Vignat, J.F. Bercher, Analysis of signals in the Fisher–Shannon information plane, Phys. Lett. A 312 (2003) 27–33.
[43] R.J. Yáñez, W. van Assche, J.S. Dehesa, Position and momentum information entropies of the D-dimensional harmonic oscillator and hydrogen atom, Phys. Rev. A 50 (1994) 3065–3079.