I guess there are references out there with a few examples, but my Google-fu failed me and so did the documentation page for Genie.
Let's take a very specific example, some Vala code from there:
using GLib;
using Gsl;

public class Test : GLib.Object
{
    public static void main (string[] args)
    {
        double[] a_data = new double[] { 0.18, 0.60, 0.57, 0.96,
                                         0.41, 0.24, 0.99, 0.58,
                                         0.14, 0.30, 0.97, 0.66,
                                         0.51, 0.13, 0.19, 0.85 };
        double[] b_data = new double[] { 1.0, 2.0, 3.0, 4.0 };

        MatrixView m = MatrixView.array (a_data, 4, 4);
        VectorView b = VectorView.array (b_data);
        Vector x = new Vector (4);
        int s;
        Permutation p = new Permutation (4);

        LinAlg.LU_decomp ((Matrix)(&m.matrix), p, out s);
        LinAlg.LU_solve ((Matrix)(&m.matrix), p, (Vector)(&b.vector), x);

        stdout.printf("x = \n");
        Vector.fprintf(stdout, x, "%g");
    }
}
Here is how I tried to convert that program into Genie:
[indent=4]
uses Gsl

init
    a_data: array of double = { 0.18, 0.60, 0.57, 0.96,
                                0.41, 0.24, 0.99, 0.58,
                                0.14, 0.30, 0.97, 0.66,
                                0.51, 0.13, 0.19, 0.85 }
    b_data: array of double = { 1.0, 2.0, 3.0, 4.0 }
    var m = Gsl.MatrixView.array(a_data, 4, 4)
    var b = Gsl.VectorView.array(b_data)
    var x = new Gsl.Vector(4)
    s:int = 0
    var p = new Gsl.Permutation(4)
    Gsl.LinAlg.LU_decomp(m, p, out s)
    Gsl.LinAlg.LU_solve(m, p, b, x)
    print "x =\n%g", x
which should be correct as per this reference.
But it fails:
LA.gs:12.28-12.32: error: syntax error, expected identifier
var m = Gsl.MatrixView.array(a_data, 4, 4)
                       ^^^^^
LA.gs:13.28-13.32: error: syntax error, expected identifier
var b = Gsl.VectorView.array(b_data)
                       ^^^^^
Compilation failed: 2 error(s), 0 warning(s)
So, could anyone please explain to me what that particular error means? What is an identifier in the context of Genie?
And what is the correct way to call such a static method in Genie?
array is a reserved keyword in Genie, so you have to prefix it with an @-sign to use it as an identifier:
var m = Gsl.MatrixView.@array(a_data, 4, 4)
var b = Gsl.VectorView.@array(b_data)
The print line is also incorrect; I have fixed most of the problems:
[indent=4]
uses Gsl

init
    a_data: array of double = { 0.18, 0.60, 0.57, 0.96,
                                0.41, 0.24, 0.99, 0.58,
                                0.14, 0.30, 0.97, 0.66,
                                0.51, 0.13, 0.19, 0.85 }
    b_data: array of double = { 1.0, 2.0, 3.0, 4.0 }
    var m = Gsl.MatrixView.@array(a_data, 4, 4)
    var b = Gsl.VectorView.@array(b_data)
    var x = new Gsl.Vector(4)
    s:int = 0
    var p = new Gsl.Permutation(4)
    Gsl.LinAlg.LU_decomp((Matrix) m, p, out s)
    Gsl.LinAlg.LU_solve((Matrix) m, p, (Vector) b.vector, x)
    stdout.printf("x =\n")
    Vector.fprintf(stdout, x, "%g")
This still doesn't compile, because the LU_ methods need a typecast that I'm not 100% sure how to do.
Update: This code compiles and runs, but I have no idea if it's actually correct:
[indent=4]
uses Gsl

init
    a_data: array of double = { 0.18, 0.60, 0.57, 0.96,
                                0.41, 0.24, 0.99, 0.58,
                                0.14, 0.30, 0.97, 0.66,
                                0.51, 0.13, 0.19, 0.85 }
    b_data: array of double = { 1.0, 2.0, 3.0, 4.0 }
    var m = Gsl.MatrixView.@array(a_data, 4, 4)
    var b = Gsl.VectorView.@array(b_data)
    var x = new Gsl.Vector(4)
    s:int = 0
    var p = new Gsl.Permutation(4)
    mp : MatrixView* = &m
    vp : Vector** = &(b.vector)
    Gsl.LinAlg.LU_decomp((Matrix) mp, p, out s)
    Gsl.LinAlg.LU_solve((Matrix) mp, p, (Vector) vp, x)
    stdout.printf("x =\n")
    Vector.fprintf(stdout, x, "%g")
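As for whether the result is actually correct: you can sanity-check it without trusting the GSL bindings at all, by solving the same 4x4 system independently and comparing the printed x values. Here is a quick sketch in plain Python (the solve helper is just for this check, not part of GSL; it does Gaussian elimination with partial pivoting, which should agree with GSL's LU solve up to rounding):

```python
# Cross-check for the GSL example: solve a_data * x = b_data
# with Gaussian elimination + partial pivoting, no external libraries.

def solve(a, b):
    """Solve a*x = b for a square matrix a (list of rows) and vector b."""
    n = len(a)
    a = [row[:] for row in a]  # work on copies
    b = b[:]
    for col in range(n):
        # Partial pivoting: bring the largest entry in this column to the top.
        pivot = max(range(col, n), key=lambda r: abs(a[r][col]))
        a[col], a[pivot] = a[pivot], a[col]
        b[col], b[pivot] = b[pivot], b[col]
        # Eliminate everything below the pivot.
        for row in range(col + 1, n):
            factor = a[row][col] / a[col][col]
            for k in range(col, n):
                a[row][k] -= factor * a[col][k]
            b[row] -= factor * b[col]
    # Back substitution.
    x = [0.0] * n
    for row in range(n - 1, -1, -1):
        s = sum(a[row][k] * x[k] for k in range(row + 1, n))
        x[row] = (b[row] - s) / a[row][row]
    return x

# Same data as in the Genie/Vala programs above.
a_data = [[0.18, 0.60, 0.57, 0.96],
          [0.41, 0.24, 0.99, 0.58],
          [0.14, 0.30, 0.97, 0.66],
          [0.51, 0.13, 0.19, 0.85]]
b_data = [1.0, 2.0, 3.0, 4.0]

x = solve(a_data, b_data)
print("x =")
for v in x:
    print("%g" % v)
```

If the Genie program prints the same four numbers (to a few significant digits), the pointer casts are doing the right thing.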