I am hitting an issue. I have to store a list in an Azure table, so I am serializing it to a string and storing it in one column. When I read that row back, there are instances where I don't get the latest list; as a result, if we then add an item to the list and write it back, the items added in the meantime are lost. Has anyone encountered this issue before, and what is the solution?
To elaborate: say we have three classes where C extends B and B extends A, so any object of B or C is also an object of A. We maintain all objects of A (including B and C) as a serialized list in one column of an Azure table; the current state of the objects is [A', B', C', A'']. The issue appears when we have multiple application servers. Server 1 wants to add an instance of B, so it reads the current list [A', B', C', A''] and appends a new instance B'' to make [A', B', C', A'', B'']. At the same time Server 2 reads the list as [A', B', C', A''] and appends C'' to make [A', B', C', A'', C'']. Whichever write lands last wins, so the information about B'' is lost. How can this kind of issue be mitigated?
I am using the .NET client library. Below is the code for the same:
// Read the current serialized list for this partition/row key.
TableOperation retrieveTypedInstancesOperation = TableOperation.Retrieve<TypedInstancesRow>(partitionKey, rowKey);
TableResult retrievedTypedInstancesResult = instancesTable.Execute(retrieveTypedInstancesOperation);
List<string> instanceLists = new List<string>();
if (retrievedTypedInstancesResult.Result != null)
{
    string instances = ((TypedInstancesRow)retrievedTypedInstancesResult.Result).Instances;
    if (!String.IsNullOrEmpty(instances)) instanceLists = JsonConvert.DeserializeObject<List<string>>(instances);
}
// Append the new instance and write the whole list back.
instanceLists.Add(InstanceId);
TypedInstancesRow hierarchy = new TypedInstancesRow();
hierarchy.PartitionKey = partitionKey;
hierarchy.RowKey = rowKey;
hierarchy.Instances = JsonConvert.SerializeObject(instanceLists);
TableOperation insertOperation = TableOperation.InsertOrReplace(hierarchy);
instancesTable.Execute(insertOperation);
Azure Table storage uses the ETag property to manage this scenario.
A summary answer to your issue, given the .NET client library code you are using:
If you use a Replace (Update) operation, it requires an ETag value, which is obtained when the entity is read. If the ETag in the table has changed between the original read and the update, the update fails. Note that this behaviour can be overridden by supplying the wildcard '*' as the ETag value for the update.
If you use an InsertOrReplace operation, the entity is replaced regardless of any changes made between the original read and the current operation. This is the operation your code above uses, which is why updates are being lost.
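For illustration, here is a minimal sketch of the difference, reusing the TypedInstancesRow, instancesTable, partitionKey and rowKey from your code and assuming the classic Microsoft.WindowsAzure.Storage table API that your snippet appears to use:
// Read the entity; the returned object carries the ETag the service assigned to it.
TableResult result = instancesTable.Execute(TableOperation.Retrieve<TypedInstancesRow>(partitionKey, rowKey));
TypedInstancesRow row = (TypedInstancesRow)result.Result;
// InsertOrReplace ignores the ETag entirely: the last writer wins.
instancesTable.Execute(TableOperation.InsertOrReplace(row));
// Replace sends the entity's ETag as an If-Match header; the service rejects the
// write with 412 (Precondition Failed) if the entity changed since it was read.
instancesTable.Execute(TableOperation.Replace(row));
// Setting row.ETag = "*" before Replace opts back out of the concurrency check.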
If you amend your code to use a Replace operation, your example above would become:
Server 1 & 2 read the table [A',B',C',A''] and the entity returned to both has the same ETag value.
Server 1 issues a Replace operation including the original ETag; this is successful and modifies the field to [A',B',C',A'', B''] and changes the ETag value in the table.
Server 2 issues a Replace operation including the original ETag it read; this fails because the ETag value is now out of date.
How this failure is handled depends on your requirements, but potentially you would notify the client user that the operation failed due to a change made by another user, re-read the latest state of the entity ([A',B',C',A'', B''] with the new ETag), and let the user apply their change again (to [A',B',C',A'', B'', C''], as I interpret your requirement).
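A sketch of that read-modify-Replace retry loop, assuming the row already exists and reusing the TypedInstancesRow, instancesTable, partitionKey, rowKey and InstanceId from your code (with Newtonsoft.Json as you already use):
// Retry until the Replace succeeds against an up-to-date ETag.
while (true)
{
    TableResult result = instancesTable.Execute(
        TableOperation.Retrieve<TypedInstancesRow>(partitionKey, rowKey));
    TypedInstancesRow row = (TypedInstancesRow)result.Result;

    List<string> instanceList = String.IsNullOrEmpty(row.Instances)
        ? new List<string>()
        : JsonConvert.DeserializeObject<List<string>>(row.Instances);
    instanceList.Add(InstanceId);
    row.Instances = JsonConvert.SerializeObject(instanceList);

    try
    {
        // Replace carries the ETag read above; it fails if another server wrote in between.
        instancesTable.Execute(TableOperation.Replace(row));
        break; // success
    }
    catch (StorageException ex) when (ex.RequestInformation.HttpStatusCode == 412)
    {
        // Precondition Failed: another server updated the entity; loop and re-read.
    }
}
If the row might not exist yet, you would need an Insert for that first write (or guard the Replace with a null check and fall back to Insert).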