These past few days I have been studying the ECperf source code. This afternoon I looked at how it loads its data: it uses executeBatch(), which executes a batch of SQL statements in a single call. So I wrote a test program to measure the performance of executeBatch().
Using a MySQL database, I inserted 100,000 records in three separate tests: the first inserts each row directly with executeUpdate(), the second collects every row with addBatch() and runs a single executeBatch() at the end, and the third runs executeBatch() once every 100 rows. The source code and test results follow.
import java.sql.*;

Connection conn = null;
try {
    Class.forName("org.gjt.mm.mysql.Driver");
    conn = DriverManager.getConnection(
            "jdbc:mysql://localhost:3306/test", "root", "qwe123");
    conn.setAutoCommit(false);
    PreparedStatement ps = conn.prepareStatement(
            "INSERT INTO tb_test VALUES (?, ?)");
    String date = new java.util.Date().toString();

    // Test 1: one executeUpdate() and commit per row
    long l = System.currentTimeMillis();
    for (int i = 0; i < 100000; i++) {
        ps.setString(1, "axman");
        ps.setString(2, date);
        ps.executeUpdate();
        conn.commit();
    }
    System.out.println(System.currentTimeMillis() - l);

    // Test 2: addBatch() for every row, one executeBatch() at the end
    long l1 = System.currentTimeMillis();
    for (int i = 0; i < 100000; i++) {
        ps.setString(1, "axman");
        ps.setString(2, date);
        ps.addBatch();
    }
    ps.executeBatch();
    conn.commit();
    System.out.println(System.currentTimeMillis() - l1);

    // Test 3: executeBatch() and commit once every 100 rows
    int j = 0;
    long l2 = System.currentTimeMillis();
    for (int i = 0; i < 100000; i++) {
        ps.setString(1, "axman");
        ps.setString(2, date);
        ps.addBatch();
        j++;
        if (j == 100) {
            j = 0;
            ps.executeBatch();
            conn.commit();
        }
    } // end for
    System.out.println(System.currentTimeMillis() - l2);
} catch (Exception ex) {
    ex.printStackTrace(System.out);
} finally {
    try {
        conn.close();
    } catch (Exception ex) {
    }
}

The test results are:
41590
43363
25877

After removing the explicit transaction from the source code (letting the connection auto-commit), the results were:

28891
31956
25517

executeBatch() seems to have made no difference at all. Someone said that "the MySQL driver really just moves the original outer loop inside executeBatch(), so every update still makes a separate round trip to the database". If that is true, the results are not surprising. I have not tested other databases, so I don't know whether they have the same problem!
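For what it's worth, this behaviour is a property of the driver, not of the JDBC batch API itself. In the newer MySQL driver (Connector/J) there is a connection property, rewriteBatchedStatements=true, which makes the driver collapse a batch of INSERTs into one multi-row INSERT instead of sending each statement separately; without it, batching behaves just as described above. A minimal sketch of building such a URL (the host, port, and database name mirror the test program and are otherwise placeholders):

```java
public class BatchUrl {
    // Sketch (not from the original post): a Connector/J JDBC URL with
    // client-side batch rewriting enabled via rewriteBatchedStatements.
    static String batchUrl(String host, int port, String db) {
        return "jdbc:mysql://" + host + ":" + port + "/" + db
                + "?rewriteBatchedStatements=true";
    }

    public static void main(String[] args) {
        // Prints: jdbc:mysql://localhost:3306/test?rewriteBatchedStatements=true
        System.out.println(batchUrl("localhost", 3306, "test"));
    }
}
```

The rest of the test program would be unchanged; only the URL passed to DriverManager.getConnection() differs.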