Loading data from an external file into InsightEdge when the grid is configured with RocksDB (MemoryXtend).
Some of the data share the same identifier.
When the RDD is saved to the grid using saveToGrid, the operation fails with "java.lang.RuntimeException: Duplicate uid in bulk-info..".
The fix is applied in the proxy (GigaSpace): when the grid is configured with RocksDB and InsightEdge is running, the operation is allowed by removing the duplicates on the client side (proxy).
This behavior can be turned on and off using the space property: space-config.engine.blobstore.rocksdb.allow_duplicate_uids.
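The client-side deduplication described above could look something like the following sketch. This is an illustration, not the actual proxy code: the class and method names are hypothetical, and the keep-last-occurrence policy is an assumption (it mimics what a sequence of single writes would produce, where a later write with the same ID overwrites an earlier one).

```java
import java.util.*;
import java.util.function.Function;

// Hypothetical sketch of client-side dedup before sending a bulk to the grid.
// Assumption: the last occurrence of each uid wins, as it would with
// sequential single writes. Not the actual GigaSpaces proxy implementation.
class DedupByUid {
    static <T> List<T> dedupKeepLast(List<T> entries, Function<T, String> uidOf) {
        Map<String, T> byUid = new LinkedHashMap<>();
        for (T entry : entries) {
            byUid.put(uidOf.apply(entry), entry); // later entry replaces earlier one
        }
        return new ArrayList<>(byUid.values());
    }
}
```

With this in place, a batch containing two entries with the same uid collapses to one entry before the bulk operation is built, so the "Duplicate uid in bulk-info" check is never triggered.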
Previous iterations, back in Sep 2019:
AA/Casper is running MemoryXtend (SSD) on Kubernetes.
They store data as a batch of multiple entries (a single RDD.saveToGrid call).
When everything is in RAM (on-heap) it works smoothly, but with MemoryXtend a "Duplicate uid" exception is thrown.
In our documentation on writeMultiple we clearly state: "Verify that duplicated entries (with the same ID) do not appear as part of the passed array".
Does the same apply to RDD.saveToGrid when MemoryXtend is configured?
Here’s the exception:
StackTrace: java.lang.RuntimeException: Duplicate uid in bulkinfo, uid=87189690^46^
In MemoryXtend there is a mechanism that translates writeMultiple/updateMultiple/takeMultiple into bulk operations against the MemoryXtend store in order to improve performance. This mechanism does not accept duplicate occurrences of the same entry in a bulk.
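The failure mode can be illustrated with a small sketch of a bulk that rejects a repeated uid. This is not the actual GigaSpaces bulk-info code; the class name and structure are hypothetical, but the behavior mirrors the exception reported in this case.

```java
import java.util.*;

// Illustrative sketch (not the real MemoryXtend bulk-info): a bulk
// collects entries keyed by uid and rejects a second occurrence of
// the same uid, mirroring the "Duplicate uid in bulk-info" failure.
class BulkInfoSketch {
    private final Map<String, Object> entriesByUid = new LinkedHashMap<>();

    void add(String uid, Object entry) {
        if (entriesByUid.putIfAbsent(uid, entry) != null) {
            throw new RuntimeException("Duplicate uid in bulk-info, uid=" + uid);
        }
    }

    int size() {
        return entriesByUid.size();
    }
}
```

Because RDD.saveToGrid packs a whole partition into one such bulk, two RDD entries that map to the same space ID end up as the same uid in a single bulk and trip this check.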
If RDD.saveToGrid() creates multiple space operations with the same entry (we should verify this by reproducing it), we have 2 options:
1. Disable MemoryXtend bulks for now (a property that can be set). This will impact performance, but the problem will disappear.
2. Fix the issue in MemoryXtend after reproducing it - this is an R&D effort.
For option 1 you can disable bulks with the space property: space-config.engine.blobstore_use_bulks.
When the space comes up, you can check in the log whether it took effect:
2019-09-15 14:10:42,637 INFO [com.gigaspaces.core.config] - Blob Store Cache size [ 186470 ]
2019-09-15 14:10:42,695 INFO [com.gigaspaces.container] - Starting space...
2019-09-15 14:10:42,709 INFO [com.gigaspaces.container] - Creation of an embedded Lookup Service was skipped - another Lookup Service is already active in this process
2019-09-15 14:10:43,115 INFO [com.gigaspaces.cache] - useBlobStoreBulks=true
2019-09-15 14:10:43,115 INFO [com.gigaspaces.cache] - useBlobStorePrefetch=false
2019-09-15 14:10:43,116 INFO [com.gigaspaces.cache] - useBlobStoreReplicationBackupBulk=false
2019-09-15 14:10:43,130 INFO [com.gigaspaces.cache] - created blob-store handler class=com.gigaspaces.blobstore.rocksdb.RocksDBBlobStoreHandler persistentBlobStore=true
2019-09-15 14:10:43,305 INFO [com.gigaspaces.cache] - BlobStore space data internal cache size=186470
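As a sketch of option 1, the property from the thread could be passed to the space as a plain space property, for example (the exact mechanism for supplying space properties depends on your deployment; only the property name is taken from this thread):

```
space-config.engine.blobstore_use_bulks=false
```

If it took effect, the startup log should presumably show useBlobStoreBulks=false instead of the useBlobStoreBulks=true seen above.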
This case was escalated by AA, who cannot properly utilize their InsightEdge deployment.
Bypassing with "disable bulks" is not an option here, and we have to go with the plan mentioned back in Sep by Yechiel.
We can allocate testing and reproduction from the CSM group if needed, but it looks like that has already been done and it is a known limitation, as mentioned here:
Please elaborate from your side.
The other group that originally reported the issue decided not to use MemoryXtend; I am not sure of the exact reason, though.
Is there a task to document this space config property and its limitations?