Jack function
In mathematics, the Jack function is a generalization of the Jack polynomial, introduced by Henry Jack. The Jack polynomial is a homogeneous, symmetric polynomial which generalizes the Schur and zonal polynomials, and is in turn generalized by the Heckman–Opdam polynomials and Macdonald polynomials.
Definition
The Jack function $J_\kappa^{(\alpha)}(x_1,x_2,\ldots,x_m)$ of an integer partition $\kappa$, parameter $\alpha$, and arguments $x_1,x_2,\ldots,x_m$ can be recursively defined as follows:
For $m=1$:
$$J_k^{(\alpha)}(x_1) = x_1^k\,(1+\alpha)\cdots(1+(k-1)\alpha)$$
For $m>1$:
$$J_\kappa^{(\alpha)}(x_1,x_2,\ldots,x_m) = \sum_\mu J_\mu^{(\alpha)}(x_1,x_2,\ldots,x_{m-1})\, x_m^{|\kappa/\mu|}\, \beta_{\kappa\mu},$$
where the summation is over all partitions $\mu$ such that the skew partition $\kappa/\mu$ is a horizontal strip, namely
$$\kappa_1 \ge \mu_1 \ge \kappa_2 \ge \mu_2 \ge \cdots \ge \kappa_{n-1} \ge \mu_{n-1} \ge \kappa_n$$
($\mu_n$ must be zero or otherwise $J_\mu(x_1,\ldots,x_{n-1}) = 0$) and
$$\beta_{\kappa\mu} = \frac{\prod_{(i,j)\in\kappa} B_{\kappa\mu}^{\kappa}(i,j)}{\prod_{(i,j)\in\mu} B_{\kappa\mu}^{\mu}(i,j)},$$
where $B_{\kappa\mu}^{\nu}(i,j)$ equals $\nu_j'-i+\alpha(\nu_i-j+1)$ if $\kappa_j'=\mu_j'$ and $\nu_j'-i+1+\alpha(\nu_i-j)$ otherwise; the branch is selected by comparing the columns of $\kappa$ and $\mu$, while the parts are read from $\nu$, which is $\kappa$ in the numerator of $\beta_{\kappa\mu}$ and $\mu$ in its denominator. The expressions $\kappa'$ and $\mu'$ refer to the conjugate partitions of $\kappa$ and $\mu$, respectively. The notation $(i,j)\in\kappa$ means that the product is taken over all coordinates $(i,j)$ of boxes in the Young diagram of the partition $\kappa$.
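The recursion translates directly into code. The following is a minimal Python sketch (the names are illustrative, not a standard library API) that evaluates $J_\kappa^{(\alpha)}$ with exact rational arithmetic; for numerically careful algorithms see Demmel & Koev (2006) in the references.

```python
from fractions import Fraction
from itertools import product


def conjugate(kappa):
    """Conjugate partition: kappa'_j = #{i : kappa_i >= j}."""
    return tuple(sum(1 for part in kappa if part >= j)
                 for j in range(1, (kappa[0] + 1) if kappa else 1))


def B(nu, kappa, mu, i, j, alpha):
    """B_{kappa mu}^{nu}(i, j): the branch depends on whether
    kappa'_j = mu'_j; the part values are read off from nu."""
    def col(p, jj):  # p'_jj, with missing columns counted as 0
        c = conjugate(p)
        return c[jj - 1] if jj <= len(c) else 0
    if col(kappa, j) == col(mu, j):
        return col(nu, j) - i + alpha * (nu[i - 1] - j + 1)
    return col(nu, j) - i + 1 + alpha * (nu[i - 1] - j)


def beta(kappa, mu, alpha):
    """beta_{kappa mu}: ratio of products of B-factors over the boxes."""
    num = den = Fraction(1)
    for i, row in enumerate(kappa, 1):
        for j in range(1, row + 1):
            num *= B(kappa, kappa, mu, i, j, alpha)
    for i, row in enumerate(mu, 1):
        for j in range(1, row + 1):
            den *= B(mu, kappa, mu, i, j, alpha)
    return num / den


def horizontal_strips(kappa):
    """All mu such that kappa/mu is a horizontal strip:
    kappa_1 >= mu_1 >= kappa_2 >= mu_2 >= ... >= mu_n >= 0."""
    n = len(kappa)
    ranges = [range(kappa[i + 1] if i + 1 < n else 0, kappa[i] + 1)
              for i in range(n)]
    for parts in product(*ranges):
        yield tuple(p for p in parts if p > 0)


def jack(kappa, m, alpha):
    """J_kappa^{(alpha)}(x_1, ..., x_m) as {exponent tuple: coefficient}."""
    if len(kappa) > m:                 # more parts than variables: J = 0
        return {}
    if not kappa:
        return {(0,) * m: Fraction(1)}
    if m == 1:                         # base case x1^k (1+a)...(1+(k-1)a)
        coeff = Fraction(1)
        for t in range(1, kappa[0]):
            coeff *= 1 + t * alpha
        return {(kappa[0],): coeff}
    result = {}
    for mu in horizontal_strips(kappa):
        sub = jack(mu, m - 1, alpha)   # empty if mu still has m parts
        if not sub:
            continue
        b = beta(kappa, mu, alpha)
        deg = sum(kappa) - sum(mu)     # |kappa/mu|, the power of x_m
        for mono, c in sub.items():
            key = mono + (deg,)
            result[key] = result.get(key, 0) + c * b
    return result


# Example: J_(2) in two variables at alpha = 1; all coefficients equal 2.
print(jack((2,), 2, Fraction(1)))
```

The last line prints the coefficient dictionary of $J_{(2)}^{(1)}(x_1,x_2) = 2(x_1^2+x_1x_2+x_2^2)$, in agreement with the connection to Schur polynomials described below.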
In 1997, F. Knop and S. Sahi gave a purely combinatorial formula for the Jack polynomials $J_\lambda^{(\alpha)}$ in $n$ variables:
$$J_\lambda^{(\alpha)} = \sum_T d_T(\alpha) \prod_{s\in T} x_{T(s)}.$$
The sum is taken over all admissible tableaux of shape $\lambda$, and
$$d_T(\alpha) = \prod_{s\in T\ \text{critical}} d_\lambda(\alpha)(s)$$
with
$$d_\lambda(\alpha)(s) = \alpha(a_\lambda(s)+1) + (l_\lambda(s)+1).$$
An admissible tableau of shape $\lambda$ is a filling of the Young diagram of $\lambda$ with numbers $1,2,\ldots,n$ such that for any box $(i,j)$ in the tableau, $T(i,j) \neq T(i',j)$ whenever $i' > i$, and $T(i,j) \neq T(i',j-1)$ whenever $j > 1$ and $i' < i$.
A box $s=(i,j)\in\lambda$ is critical for the tableau $T$ if $j>1$ and $T(i,j)=T(i,j-1)$.
This result can be seen as a special case of the more general combinatorial formula for Macdonald polynomials.
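For example, take $\lambda=(2)$ and $n=2$. All four fillings of the two boxes with entries from $\{1,2\}$ are admissible (both conditions are vacuous for a single row), and the box $(1,2)$ is critical exactly in the two constant fillings, where it contributes $d_\lambda(\alpha)(1,2) = \alpha(0+1)+(0+1) = 1+\alpha$. One can check that the formula then gives $J_{(2)}^{(\alpha)}(x_1,x_2) = (1+\alpha)(x_1^2+x_2^2) + 2x_1x_2$, in agreement with the recursive definition.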
C normalization
The Jack functions form an orthogonal basis in a space of symmetric polynomials, with inner product
$$\langle f,g\rangle = \int_{[0,2\pi]^n} f\left(e^{i\theta_1},\ldots,e^{i\theta_n}\right)\, \overline{g\left(e^{i\theta_1},\ldots,e^{i\theta_n}\right)}\, \prod_{1\le j<k\le n} \left|e^{i\theta_j}-e^{i\theta_k}\right|^{2/\alpha}\, d\theta_1\cdots d\theta_n.$$
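This orthogonality is easy to check numerically. The sketch below (for $n=2$ only; `torus_inner` is an illustrative helper, not a library function) approximates the integral on a periodic trapezoidal grid, which is essentially exact here because the integrand is a trigonometric polynomial:

```python
import numpy as np

def torus_inner(f, g, alpha=1.0, grid=64):
    """<f, g> for n = 2, approximated on a periodic trapezoidal grid."""
    t = np.linspace(0.0, 2 * np.pi, grid, endpoint=False)
    t1, t2 = np.meshgrid(t, t)
    z1, z2 = np.exp(1j * t1), np.exp(1j * t2)
    weight = np.abs(z1 - z2) ** (2.0 / alpha)
    vals = f(z1, z2) * np.conj(g(z1, z2)) * weight
    return vals.mean() * (2 * np.pi) ** 2   # mean * area = trapezoid rule

# J_(2) and J_(1,1) at alpha = 1 (multiples of Schur polynomials):
J2  = lambda a, b: 2 * (a**2 + a * b + b**2)
J11 = lambda a, b: 2 * a * b
print(abs(torus_inner(J2, J11)))   # ~1e-13: distinct partitions are orthogonal
print(abs(torus_inner(J2, J2)))    # > 0
```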
This orthogonality property is unaffected by normalization. The normalization defined above is typically referred to as the J normalization. The C normalization is defined as
$$C_\kappa^{(\alpha)}(x_1,\ldots,x_n) = \frac{\alpha^{|\kappa|}\,|\kappa|!}{j_\kappa}\, J_\kappa^{(\alpha)}(x_1,\ldots,x_n),$$
where
$$j_\kappa = \prod_{(i,j)\in\kappa} \left(\kappa_j'-i+\alpha(\kappa_i-j+1)\right)\left(\kappa_j'-i+1+\alpha(\kappa_i-j)\right).$$
For $\alpha=2$, $C_\kappa^{(2)}(x_1,\ldots,x_n)$ is often denoted by $C_\kappa(x_1,\ldots,x_n)$ and called the zonal polynomial.
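Continuing the sketch from the Definition section (reusing `conjugate`, `jack`, and `Fraction`; the names are again illustrative), the conversion factor can be computed directly from the boxes of $\kappa$:

```python
from math import factorial

def j_kappa(kappa, alpha):
    """j_kappa: product over boxes of the two 'alpha-hook' factors above."""
    kc = conjugate(kappa)
    out = Fraction(1)
    for i, row in enumerate(kappa, 1):
        for j in range(1, row + 1):
            out *= kc[j - 1] - i + alpha * (row - j + 1)
            out *= kc[j - 1] - i + 1 + alpha * (row - j)
    return out

def jack_C(kappa, m, alpha):
    """C normalization: alpha^|kappa| * |kappa|! / j_kappa times J."""
    scale = alpha ** sum(kappa) * factorial(sum(kappa)) / j_kappa(kappa, alpha)
    return {mono: c * scale for mono, c in jack(kappa, m, alpha).items()}
```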
P normalization
The P normalization is given by the identity $J_\lambda = H'_\lambda P_\lambda$, where
$$H'_\lambda = \prod_{s\in\lambda} \left(\alpha a_\lambda(s) + l_\lambda(s) + 1\right)$$
and $a_\lambda(s)$ and $l_\lambda(s)$ denote the arm length and leg length of the box $s$, respectively. Therefore, for $\alpha=1$, $P_\lambda$ is the usual Schur function.
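For example, for $\lambda=(2,1)$ the three boxes contribute $\alpha\cdot 1+1+1 = \alpha+2$, $1$, and $1$, so $H'_{(2,1)} = \alpha+2$ and $J_{(2,1)} = (\alpha+2)\,P_{(2,1)}$; at $\alpha=1$ this gives $J_{(2,1)} = 3\,s_{(2,1)}$, matching the hook-length product $H_{(2,1)} = 3$ in the Schur connection below.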
Similar to Schur polynomials, $P_\lambda$ can be expressed as a sum over Young tableaux. However, one needs to add an extra weight to each tableau that depends on the parameter $\alpha$. Thus, a formula for the Jack function $P_\lambda$ is given by
$$P_\lambda = \sum_T \psi_T(\alpha) \prod_{s\in\lambda} x_{T(s)}$$
where the sum is taken over all tableaux of shape $\lambda$, and $T(s)$ denotes the entry in box $s$ of $T$.
The weight $\psi_T(\alpha)$ can be defined in the following fashion: each tableau $T$ of shape $\lambda$ can be interpreted as a sequence of partitions
$$\emptyset = \nu_1 \to \nu_2 \to \cdots \to \nu_n = \lambda$$
where $\nu_{i+1}/\nu_i$ defines the skew shape with content $i$ in $T$. Then
$$\psi_T(\alpha) = \prod_i \psi_{\nu_{i+1}/\nu_i}(\alpha)$$
where
$$\psi_{\lambda/\mu}(\alpha) = \prod_{s\,\in\, R_{\lambda/\mu}-C_{\lambda/\mu}} \frac{\alpha a_\mu(s)+l_\mu(s)+1}{\alpha a_\mu(s)+l_\mu(s)+\alpha}\;\frac{\alpha a_\lambda(s)+l_\lambda(s)+\alpha}{\alpha a_\lambda(s)+l_\lambda(s)+1}$$
and the product is taken only over boxes $s$ in $\lambda$ such that $s$ has a box from $\lambda/\mu$ in the same row, but not in the same column.
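For example, for $\lambda=(2)$ and $\mu=(1)$ the only such box is $s=(1,1)$: with $a_\mu(s)=l_\mu(s)=0$ and $a_\lambda(s)=1$, $l_\lambda(s)=0$, one gets
$$\psi_{(2)/(1)}(\alpha) = \frac{1}{\alpha}\cdot\frac{2\alpha}{\alpha+1} = \frac{2}{1+\alpha}.$$
Summing over the three tableaux $(1\,1)$, $(1\,2)$, $(2\,2)$ of shape $(2)$ in two variables then yields $P_{(2)} = x_1^2 + x_2^2 + \frac{2}{1+\alpha}\,x_1x_2$, consistent with $J_{(2)} = H'_{(2)}P_{(2)} = (1+\alpha)P_{(2)}$.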
Connection with the Schur polynomial
When $\alpha=1$, the Jack function is a scalar multiple of the Schur polynomial:
$$J_\kappa^{(1)}(x_1,x_2,\ldots,x_n) = H_\kappa\, s_\kappa(x_1,x_2,\ldots,x_n),$$
where
$$H_\kappa = \prod_{(i,j)\in\kappa} h_\kappa(i,j) = \prod_{(i,j)\in\kappa} \left(\kappa_i + \kappa_j' - i - j + 1\right)$$
is the product of all hook lengths of $\kappa$.
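In code (reusing `conjugate` and `jack` from the sketch in the Definition section; illustrative only):

```python
def hook_product(kappa):
    """H_kappa: product of hook lengths kappa_i + kappa'_j - i - j + 1."""
    kc = conjugate(kappa)
    H = 1
    for i, row in enumerate(kappa, 1):
        for j in range(1, row + 1):
            H *= row + kc[j - 1] - i - j + 1
    return H

# hook_product((2, 1)) == 3, and jack((2, 1), 3, Fraction(1)) has every
# coefficient equal to 3 times the corresponding Schur coefficient.
```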
Properties
If the partition has more parts than the number of variables, then the Jack function is 0:
$$J_\kappa^{(\alpha)}(x_1,x_2,\ldots,x_m) = 0, \quad \text{if } \kappa_{m+1} > 0.$$
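In the sketch from the Definition section, this is precisely the `len(kappa) > m` base case: for instance, `jack((1, 1), 1, Fraction(1))` returns the empty dictionary, i.e. the zero polynomial.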
Matrix argument
In some texts, especially in random matrix theory, authors have found it more convenient to use a matrix argument in the Jack function. The connection is simple. If $X$ is a matrix with eigenvalues $x_1,x_2,\ldots,x_m$, then
$$J_\kappa^{(\alpha)}(X) = J_\kappa^{(\alpha)}(x_1,x_2,\ldots,x_m).$$
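Building on the sketch above, one might evaluate the matrix-argument Jack function of a symmetric matrix numerically as follows (an illustrative helper, not a standard API):

```python
import numpy as np

def jack_matrix(kappa, X, alpha):
    """J_kappa^{(alpha)}(X), evaluated via the eigenvalues of X."""
    eigs = np.linalg.eigvalsh(X)          # X assumed symmetric/Hermitian
    poly = jack(kappa, len(eigs), Fraction(alpha))
    return sum(float(c) * np.prod(eigs ** np.array(mono))
               for mono, c in poly.items())
```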
References
Demmel, James; Koev, Plamen (2006), "Accurate and efficient evaluation of Schur and Jack functions", Mathematics of Computation, 75 (253): 223–239, doi:10.1090/S0025-5718-05-01780-1, MR 2176397.
Jack, Henry (1970–1971), "A class of symmetric polynomials with a parameter", Proceedings of the Royal Society of Edinburgh, Section A. Mathematics, 69: 1–18, MR 0289462.
Knop, Friedrich; Sahi, Siddhartha (1997), "A recursion and a combinatorial formula for Jack polynomials", Inventiones Mathematicae, 128 (1): 9–22, arXiv:q-alg/9610016, doi:10.1007/s002220050134.
Macdonald, I. G. (1995), Symmetric Functions and Hall Polynomials, Oxford Mathematical Monographs (2nd ed.), New York: Oxford University Press, ISBN 978-0-19-853489-1, MR 1354144.
Stanley, Richard P. (1989), "Some combinatorial properties of Jack symmetric functions", Advances in Mathematics, 77 (1): 76–115, doi:10.1016/0001-8708(89)90015-7, MR 1014073.