To export data, we’re using the “HANDLER … READ NEXT …” statement (later this should become a real HandlerSocket connection).
Because the table is large relative to the available RAM, we do this in chunks using a LIMIT clause.
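For reference, a minimal sketch of the chunked loop (table name and chunk size are placeholders, not our real values):

```sql
-- Open a handler on the table (placeholder name).
HANDLER my_table OPEN;

-- Fetch the first chunk.
HANDLER my_table READ FIRST LIMIT 10000;

-- Fetch subsequent chunks; repeat until fewer rows than the
-- LIMIT come back, which signals the end of the table.
HANDLER my_table READ NEXT LIMIT 10000;

-- Release the handler when done.
HANDLER my_table CLOSE;
```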
This works well! Even a machine/worker with a small amount of RAM can do it!
The first access is always fine!
The second one (sometimes the 3rd or 4th; it seems to depend on the limit size?) triggers a problem:
The node goes into a long “system lock” while doing nothing.
After a while the client gets a “finished” with no error, just as if all the data had been transferred!
If we don’t use a chunked LIMIT (more precisely: LIMIT 100,000,000,000, which is effectively infinite, because omitting LIMIT means a limit of 1!), everything works on a client whose RAM roughly matches the size of the table data (without overhead). But this can’t be a good solution, because the data could grow!
Could anyone give me a hint on where to look?
Even an answer like “this is impossible … on a cluster …” would be great!