When using the MySQL NDB Cluster distributed database, the following general problem has come up: to insert a new record/row, the last row in the table must first be read and checked for compatibility with the row to be inserted, and only then is the new row inserted.
However, the problem is that another instance may run through the same process at the same time, so in the end both new rows have been checked against the same last record, found compatible, and inserted.
Is there a way to prevent this with a locking mechanism?
That is, one instance fetches the last row/record via JDBC in Java and locks it against concurrent reads, then checks the object and inserts the new record, and only after that can a second instance run through the same process.
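For illustration, here is roughly the flow I have in mind, as a minimal sketch: the table `records`, its columns `id` and `payload`, the connection details, and the `isCompatible` check are all placeholders, and I am not sure whether `SELECT ... FOR UPDATE` inside a transaction is the right way to take such a lock on NDB Cluster.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class InsertWithLock {

    public static void main(String[] args) throws SQLException {
        // Placeholder connection details
        String url = "jdbc:mysql://127.0.0.1:3306/mydb";
        try (Connection conn = DriverManager.getConnection(url, "user", "password")) {
            conn.setAutoCommit(false); // read, check, and insert in one transaction
            try {
                // Read the last row and ask the server to lock it
                String selectLast =
                        "SELECT id, payload FROM records ORDER BY id DESC LIMIT 1 FOR UPDATE";
                String lastPayload = null;
                try (PreparedStatement ps = conn.prepareStatement(selectLast);
                     ResultSet rs = ps.executeQuery()) {
                    if (rs.next()) {
                        lastPayload = rs.getString("payload");
                    }
                }

                // Check compatibility of the new row against the last one (application logic)
                String newPayload = "new value";
                if (!isCompatible(lastPayload, newPayload)) {
                    conn.rollback();
                    return;
                }

                // Insert the new row while the last row is still locked
                try (PreparedStatement ps =
                             conn.prepareStatement("INSERT INTO records (payload) VALUES (?)")) {
                    ps.setString(1, newPayload);
                    ps.executeUpdate();
                }

                conn.commit(); // commit releases the lock
            } catch (SQLException e) {
                conn.rollback();
                throw e;
            }
        }
    }

    private static boolean isCompatible(String lastPayload, String newPayload) {
        // Placeholder for the real compatibility check
        return true;
    }
}
```

The idea would be that the second instance blocks on the locked last row until the first transaction commits, and only then performs its own check and insert. Is this the correct approach for NDB Cluster, or is there a better way?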
Thanks for your answers.