Problem with TMySQLStatement and multiple inserts

Dear ROOTers,

I use ROOT to access a MySQL database. When creating prepared statements that insert several rows at once, I come across a problem where the BLOBs end up empty.

An example is attached that shows the exact problem. I build one or several statements (depending on nbBlocks), each of which should insert x > 10 rows (depending on nbObjPerBlock), each row made of a varchar, a timestamp and a blob. I end up with a query such as

REPLACE INTO test (moname, updatetime, data) VALUES (?, ?, ?), (?, ?, ?), (? ,?, ?)...

The example is taken out of context and therefore looks a bit awkward (a prepared statement used only once…), sorry about that. Nevertheless, if you create the table as described in the source and run the example, you will see that the first 10 rows are inserted properly but the following ones are not, even though the names and dates are still ok. Only the blobs are affected. See here:

mysql> select moname, updatetime, octet_length(data) from test order by moname;
+--------+---------------------+--------------------+
| moname | updatetime          | octet_length(data) |
+--------+---------------------+--------------------+
| test0  | 2009-01-01 01:01:01 |               1052 |
| test1  | 2009-01-01 01:01:01 |               1052 |
| test10 | 2009-01-01 01:01:01 |                  0 |
| test11 | 2009-01-01 01:01:01 |                  0 |
| test12 | 2009-01-01 01:01:01 |                  0 |
| test13 | 2009-01-01 01:01:01 |               1052 |
| test14 | 2009-01-01 01:01:01 |               1052 |
| test15 | 2009-01-01 01:01:01 |               1052 |
| test16 | 2009-01-01 01:01:01 |               1052 |
| test17 | 2009-01-01 01:01:01 |               1052 |
+--------+---------------------+--------------------+

What is even stranger is that the number of rows it can successfully insert depends on the program and not on the size of the objects. In the AMORE framework (ALICE collaboration) we see this happen for rows > 47, but this also depends on the user code…

Finally, if the example seems to work on your computer, put it in a loop, run it many times and increase the number of rows in the insert. That is what happened in my case.

I really can’t tell whether the bug is in my code or (sorry) in ROOT.
Any help is more than welcome,
testMysql.C (3.1 KB)

Hi again,

I guess that many people are on holidays during this period… :slight_smile:
but could someone check this? Or tell me when somebody will be able to have a look at it?

Best regards,

Hi, Barth

You are right - I am just back from my holidays.

How many statement parameters do you use in total? I am not sure whether there are any limitations, but there very well could be.

Can you try not to insert many rows in one call, but do it in several iterations instead?
You can find more information on how this can be done in the TSQLStatement documentation.


Hi Sergey,

There are 3 parameters per inserted row. If you look at the example I sent, you will see that you can change the variable nbObjPerBlock to vary the number of rows inserted per query and the number of queries. Below 10 rows per query (30 parameters) it works… in this example. In my software, the limit is usually 47 rows of 8 parameters, but it varies depending on the user’s plugin. This indicates that the problem is not a fixed limit imposed by MySQL (which I also checked in the documentation; the only limit there is max_allowed_packet). By the way, the problem happens on all machines and platforms I have worked with (all Linux, but I tested on many machines with different CPUs, SLC versions and MySQL versions).

Finally, I tried using the C API directly in my software and the problem went away. I can convert the example I sent to the C MySQL API to show this, if you want.

I would greatly prefer to continue using the ROOT MySQL API rather than the C one, which is far more complicated.

Anyway, take your time to have a look at this and at the example I sent. Coming back from holidays you certainly have many things to do.

Best regards,

Hi, Barth

Can you provide your C-based code so that I can compare it with the code in the ROOT classes? And can you try the code the way I propose? It should look like this:

  TSQLStatement* stmt = serv->Statement("REPLACE INTO test (moname, updatetime, data) VALUES (?, ?, ?)", 100);

  for (int object = block*nbObjPerBlock; object < block*nbObjPerBlock + nbObjPerBlock; object++) {
    if (!stmt->NextIteration()) {
       cout << "Error" << endl; return;
    }

    ostringstream s;
    s << "test" << object;
    cout << "obj " << s.str() << endl;
    stmt->SetString(0, s.str().c_str());
    stmt->SetTimestamp(1, 2009, 1, 1, 1, 1, 1);
    mess.WriteObjectAny(&mo, mo.IsA());
    stmt->SetBinary(2, mess.Buffer(), mess.Length() + sizeof(UInt_t), 0x2fffff6);
  }

  stmt->Process();
  delete stmt;

My doubt also concerns the constant 0x2fffff6 in the SetBinary call. I suspect this is your main problem.

This parameter sets the size of the internal buffer used to store your binary data. Here you allocate a 50 MB buffer for each binary parameter. Do you really need that? In your case (with many rows inserted in a single statement) it should be just the size of your binary data (mess.Length() + sizeof(UInt_t)).
Can you try to reduce it?

Generally, this is also the main reason why I propose using the code shown above - you would not allocate as many big binary buffers as you do now.


Hi Sergey,

Thanks for your prompt reply.

The problem was indeed the hard-coded maximum value passed to SetBinary.
I tested the example I sent with mess.Length() + sizeof(UInt_t) and it worked fine. I will now try it in my software, but I am optimistic.
This hard-coded value came from another thread where we discussed a different problem. I didn’t question the hard-coded value; I just increased it and used it. That was a mistake :slight_smile:

Concerning the code you propose: it is what I had previously, and I agree that it is the normal usage of a prepared statement. However, I ran some benchmarks a while ago and it was much, much faster to make 4 inserts of 50 rows each than 200 single-row inserts (the way you showed), which surprised me. That is why I decided to use multi-row inserts instead of single ones.

Thanks a lot for your help,