I want to diagonalise the quadratic form of A, but what I get from $P^TAP$ is not diagonal:
from sympy import Matrix, sqrt

A = Matrix([[4,4,0,-3], [4,4,3,0], [0,3,4,4], [-3,0,4,4]])
eigen_data = A.eigenvects()
eigenvectors = [vec.transpose() for val, mult, vecs in eigen_data for vec in vecs]
P = Matrix(eigenvectors).transpose()
The columns here are not unit vectors. Noting that all columns have norm $\frac{5\sqrt{2}}{3}$, I did:
PU = P * 3 / (5*sqrt(2))
But the result is not diagonal, so something is going wrong:
PU.transpose()*A*PU
Your matrix is symmetric (so it should have N orthonormal eigenvectors). However, it also has repeated eigenvalues, so the library routines won't necessarily return an orthonormal set - you'd have to build one yourself by Gram-Schmidt or similar.
As already pointed out, this means that you can't assume $P^T$ is the same as $P^{-1}$, and the general diagonalisation should be done with $P^{-1}$, not $P^T$.
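Since the question uses SymPy, here is a minimal sketch of that route (assuming SymPy's GramSchmidt helper): orthonormalise the eigenvectors within each eigenspace, so that P is orthogonal and $P^TAP$ comes out diagonal.

from sympy import Matrix, GramSchmidt

A = Matrix([[4,4,0,-3], [4,4,3,0], [0,3,4,4], [-3,0,4,4]])
cols = []
for val, mult, vecs in A.eigenvects():
    # orthonormal=True normalises as well as orthogonalises within each eigenspace;
    # eigenvectors of a symmetric matrix for different eigenvalues are already orthogonal
    cols += GramSchmidt(vecs, orthonormal=True)
P = Matrix.hstack(*cols)
print(P.T * A * P)   # diagonal matrix with the eigenvalues on the diagonal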
Here with standard numpy routines:
import numpy as np
A = np.array( [ [4,4,0,-3], [4,4,3,0], [0,3,4,4], [-3,0,4,4] ] )
L, P = np.linalg.eig( A ) # columns of P are the eigenvectors
print( "Eigenvalues:", L )
np.set_printoptions(formatter={'float': "{:8.4f}\t".format})
print( "\nP matrix:\n", P )
print( "\nP^T A P =\n", (P.T) @ A @ P )
print( "\nP^-1 A P =\n", np.linalg.inv(P) @ A @ P )
Output:
Eigenvalues: [ 9. -1. 9. -1.]
P matrix:
[[ 0.7071 0.7071 -0.0196 0.1715 ]
[ 0.5657 -0.5657 -0.4398 0.2744 ]
[ 0.0000 0.0000 -0.7068 -0.6860 ]
[ -0.4243 0.4243 -0.5537 0.6517 ]]
P^T A P =
[[ 9.0000 -0.0000 -0.2499 0.0000 ]
[ -0.0000 -1.0000 0.0000 -0.2425 ]
[ -0.2499 0.0000 9.0000 -0.0000 ]
[ -0.0000 -0.2425 -0.0000 -1.0000 ]]
P^-1 A P =
[[ 9.0000 -0.0000 0.0000 0.0000 ]
[ -0.0000 -1.0000 -0.0000 0.0000 ]
[ 0.0000 0.0000 9.0000 -0.0000 ]
[ -0.0000 -0.0000 0.0000 -1.0000 ]]
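For completeness, a sketch of the "or similar" route in numpy (my addition, not part of the output above): np.linalg.eigh is intended for symmetric/Hermitian matrices and returns an orthonormal set of eigenvectors even with repeated eigenvalues, so with it $P^T$ really is $P^{-1}$.

import numpy as np

A = np.array([[4,4,0,-3], [4,4,3,0], [0,3,4,4], [-3,0,4,4]], dtype=float)
L, P = np.linalg.eigh(A)             # eigenvalues in ascending order, columns of P orthonormal
print("Eigenvalues:", L)             # [-1. -1.  9.  9.]
print("P^T A P =\n", P.T @ A @ P)    # diagonal (up to rounding)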