I have to keep structs of multiple types in a slice and seed them. I take them in as a variadic parameter of an interface type and iterate over them. Calling a method through the interface works, but I can't get at the underlying struct. How can I solve that?
Note: the Seed() method returns the file name of the seed data.
The Interface:
type Seeder interface {
	Seed() string
}
Method:
func (AirportCodes) Seed() string {
	return "airport_codes.json"
}
SeederSlice:
seederModelList = []globals.Seeder{
	m.AirportCodes{},
	m.Term{},
}
And the last one, SeedSchema function:
func (db *Database) SeedSchema(models ...globals.Seeder) error {
	var (
		subjects []globals.Seeder
		fileByte []byte
		err      error
		// tempMember map[string]interface{}
	)
	if len(models) == 0 {
		subjects = seederModelList
	} else {
		subjects = models
	}
	for _, model := range subjects {
		fileName := model.Seed()
		fmt.Printf("%+v\n", model)
		if fileByte, err = os.ReadFile("db/seeds/" + fileName); err != nil {
			fmt.Println("asd", err)
			// return err
		}
		if err = json.Unmarshal(fileByte, &model); err != nil {
			fmt.Println("dsa", err)
			// return err
		}
		modelType := reflect.TypeOf(model).Elem()
		modelPtr2 := reflect.New(modelType)
		fmt.Printf("%s\n", modelPtr2)
	}
	return nil
}
I can reach the exact model, but I can't create an instance of it and seed it.
After some back and forth in the comments, I'll just post this minimal answer here. It's by no means a definitive "this is what you do" type answer, but I hope this can at least provide you with enough information to get you started. To get to this point, I've made a couple of assumptions based on the snippets of code you've provided, and I'm assuming you want to seed the DB through a command of sorts (e.g. your_bin seed). That means the following assumptions have been made:
- You have models (AirportCodes and the like)
- Each model has a Seed() method (currently returning a .json file name)
- Each seed file contains an array of objects for one model, along the lines of [{"seed": "data"}, {"more": "data"}]
OK, so let's start by moving all of the JSON files to a predictable location. In a sizeable, real-world application you'd use something like the XDG base path, but for the sake of brevity, let's assume you're running this in a scratch container from / and all relevant assets have been copied in to said container.
It'd make sense to have all seed files in the base path under a seed_data
directory. Each file contains the seed data for a specific table, and therefore all the data within a file maps neatly onto a single model. Let's ignore relational data for the time being. We'll just assume that, for now, the data in these files is at least internally consistent, and any X-to-X
relational data will have the right ID fields to allow for JOINs and the like.
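With that layout, each file unmarshals straight into a slice of its one model. Here's a minimal stdlib-only sketch of that step (the AirportCode fields and the sample data are assumptions for illustration); note the decode target is a concrete slice, not an interface value, which is why json.Unmarshal can actually populate it:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// AirportCode mirrors one object in seed_data/airport_codes.json
// (field names assumed for illustration).
type AirportCode struct {
	Code string `json:"code"`
	City string `json:"city"`
}

// loadAirportCodes decodes a seed file's contents into concrete structs.
func loadAirportCodes(data []byte) ([]AirportCode, error) {
	codes := []AirportCode{}
	if err := json.Unmarshal(data, &codes); err != nil {
		return nil, err
	}
	return codes, nil
}

func main() {
	// In practice this comes from os.ReadFile("seed_data/airport_codes.json").
	data := []byte(`[{"code": "AMS", "city": "Amsterdam"}, {"code": "LHR", "city": "London"}]`)
	codes, err := loadAirportCodes(data)
	if err != nil {
		panic(err)
	}
	fmt.Println(len(codes), codes[0].Code) // 2 AMS
}
```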
So we have our models, and the data in JSON files. Now we can just create a slice of said models, making sure that data that you want/need to be present before other data is inserted is represented as a higher entry (lower index) than the other. Kind of like this:
seederModelList = []globals.Seeder{
	m.AirportCodes{}, // seeds before Term
	m.Term{},         // seeds after AirportCodes
}
But instead of returning the file name from this Seed method, why not pass in the connection and have the model handle its own data, like this:
func (_ AirportCodes) Seed(db *gorm.DB) error {
	// we know what file this model uses
	data, err := os.ReadFile("seed_data/airport_codes.json")
	if err != nil {
		return err
	}
	// we have the data, we can unmarshal it as AirportCodes instances
	codes := []*AirportCodes{}
	if err := json.Unmarshal(data, &codes); err != nil {
		return err
	}
	// now INSERT, UPDATE, or UPSERT; Create returns *gorm.DB, so return its Error field
	return db.Clauses(clause.OnConflict{
		UpdateAll: true,
	}).Create(&codes).Error
}
Do the same for other models, like Terms:
func (_ Terms) Seed(db *gorm.DB) error {
	// we know what file this model uses
	data, err := os.ReadFile("seed_data/terms.json")
	if err != nil {
		return err
	}
	// we have the data, we can unmarshal it as Terms instances
	terms := []*Terms{}
	if err := json.Unmarshal(data, &terms); err != nil {
		return err
	}
	// now INSERT, UPDATE, or UPSERT; again, return the Error field
	return db.Clauses(clause.OnConflict{
		UpdateAll: true,
	}).Create(&terms).Error
}
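To see the shape of this per-model Seed pattern without gorm in the way, here's a self-contained sketch where a hypothetical Creator interface and an in-memory fake stand in for *gorm.DB (none of this is gorm API, it just mirrors the structure):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Creator is a hypothetical stand-in for the subset of *gorm.DB we use.
type Creator interface {
	Create(value interface{}) error
}

// Term is a toy model; the field name is assumed for illustration.
type Term struct {
	Name string `json:"name"`
}

// Seed unmarshals the model's seed data and hands it to the DB.
func (Term) Seed(db Creator) error {
	// In practice: data, err := os.ReadFile("seed_data/terms.json")
	data := []byte(`[{"name": "arrival"}, {"name": "departure"}]`)
	terms := []*Term{}
	if err := json.Unmarshal(data, &terms); err != nil {
		return err
	}
	return db.Create(&terms)
}

// memDB is an in-memory fake that just records what was inserted.
type memDB struct{ inserted []interface{} }

func (m *memDB) Create(v interface{}) error {
	m.inserted = append(m.inserted, v)
	return nil
}

func main() {
	db := &memDB{}
	if err := (Term{}).Seed(db); err != nil {
		panic(err)
	}
	fmt.Println(len(db.inserted)) // 1
}
```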
Of course, this does result in a bit of a mess considering we have DB access in a model, which should really be just a DTO if you ask me. This also leaves a lot to be desired in terms of error handling, but the basic gist of it would be this:
func main() {
	db, _ := gorm.Open(mysql.Open(dsn), &gorm.Config{}) // omitted error handling for brevity
	seeds := []interface {
		Seed(*gorm.DB) error
	}{
		model.AirportCodes{},
		model.Terms{},
		// etc...
	}
	for _, m := range seeds {
		if err := m.Seed(db); err != nil {
			panic(err)
		}
	}
	// gorm v2 has no Close on *gorm.DB; close the underlying sql.DB instead
	sqlDB, _ := db.DB()
	sqlDB.Close()
}
OK, so this should get us started, but let's move this all into something a bit nicer by getting the DB access out of the models and into a dedicated layer.
As mentioned earlier, I'm assuming you have something like repositories to handle DB interactions in a separate package. Rather than calling Seed on the model and passing the DB connection into it, we should instead rely on our repositories:
db, _ := gorm.Open() // same as before
acs := repo.NewAirportCodes(db) // pass in connection
tms := repo.NewTerms(db) // again...
Now our model can still return the JSON file name, or we can have that as a const in the repos. At this point, it doesn't really matter. The main thing is, we can have the actual inserting of data done in the repositories.
You can, if you want, change your seed slice thing to something like this:
calls := []func() error{
	acs.Seed, // assuming your repo has a Seed function that does what it's supposed to do
	tms.Seed,
}
Then perform all the seeding in a loop:
for _, c := range calls {
	if err := c(); err != nil {
		panic(err)
	}
}
Now, this just leaves us with the issue of the transaction stuff. Thankfully, gorm makes this really rather simple:
db, _ := gorm.Open()
db.Transaction(func(tx *gorm.DB) error {
	acs := repo.NewAirportCodes(tx) // create repos, but use tx for the connection
	if err := acs.Seed(); err != nil {
		return err // returning an error will automatically roll back the transaction
	}
	tms := repo.NewTerms(tx)
	if err := tms.Seed(); err != nil {
		return err
	}
	return nil // commit transaction
})
There's a lot more you can fiddle with here, like creating batches of related data that can be committed separately. You can add more precise error handling and more informative logging, and handle conflicts better (distinguish between CREATE and UPDATE, etc.). Above all else, though, something worth keeping in mind:
I have to confess that I've not dealt with gorm in quite some time, but IIRC, you can have the tables be auto-migrated if the model changes, and run either custom Go code or SQL files on startup, which can be used rather easily to seed the data. Might be worth looking at the feasibility of that...