I would like to test trains usage during a grid search, and it is not clear how to do so.
from trains import Task
Task.init(project_name="project name", task_name='name')
creates an experiment on the demo server and logs everything, but you can't call init twice no matter the 'task_name', and
from trains import Task
Task.create(project_name="project name", task_name='name')
can be called with different 'task_name' values, but it does not log any data to the server and only creates a 'Draft'.
Here is some sample code:
epochs = [160, 300]
for epoch in epochs:
    model = define_model_run(epoch)
    model.fit(x_train, y_train)
    score = model.score(...)
My final try was:
epochs = [160, 300]
task = Task.init(project_name="demo", task_name='search')
for epoch in epochs:
    task.create(project_name="demo", task_name=f'search_{epoch}')
    model = define_model_run(epoch)
    model.fit(x_train, y_train)
    score = model.score(...)
which logs all the information under the experiments tab and none under the 'Draft' entries. I spent the last two hours reading the little documentation provided and the source code, but no luck.
Any help?
Disclaimer: I'm a member of the TRAINS team.
Yes, that's exactly the answer. The idea is that you always have one main Task; to create a new one, you need to close the running Task and re-initialize with a new name. Kudos on solving it so quickly :)
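For reference, a minimal sketch of that close/re-init pattern, assuming a trains version that supports calling Task.init again after Task.close (define_model_run, x_train, y_train are the placeholders from your snippet, and x_test/y_test are assumed to exist):
from trains import Task

epochs = [160, 300]
for epoch in epochs:
    # each iteration gets its own experiment on the server;
    # reuse_last_task_id=False forces a brand-new task every time
    task = Task.init(project_name="demo", task_name=f'search_{epoch}',
                     reuse_last_task_id=False)
    model = define_model_run(epoch)  # placeholder from the snippet above
    model.fit(x_train, y_train)
    score = model.score(x_test, y_test)  # x_test/y_test assumed to exist
    task.close()  # close this Task so the next Task.init starts a fresh one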
BTW: You can see examples here and here, showing how to send accuracy logs so it is easier to compare the experiments, especially when running a hyper-parameter search.
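For example, a short sketch of reporting a scalar with the trains Logger inside the loop above (the 'accuracy'/'test' title and series names are arbitrary choices, not API requirements):
from trains import Logger

# report one accuracy point per run (requires an initialized Task);
# the values show up as comparable plots in the web UI
Logger.current_logger().report_scalar(
    title='accuracy', series='test', value=score, iteration=epoch)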