performance - Changing Large MySQL InnoDB Tables


Adding a new column or a new index can take hours or even days for large InnoDB tables in MySQL with more than 1 million rows. What is the best way to improve performance on large InnoDB tables in these two cases? More memory, tweaking the configuration (for example, increasing buffer sizes), or some kind of trick? Instead of altering the table directly, one could create a new table, alter it, and then copy the old data into the new one, like this:

  CREATE TABLE tablename_tmp LIKE tablename;
  ALTER TABLE tablename_tmp ADD fieldname fieldtype;
  INSERT INTO tablename_tmp SELECT * FROM tablename;
  ALTER TABLE tablename RENAME tablename_old;
  ALTER TABLE tablename_tmp RENAME tablename;

Would this also be recommended for an InnoDB table, or is it exactly what the ALTER TABLE command does anyway?

Edit 2016: We recently (August 2016) released gh-ost, and I have modified my answer to reflect it.

There are, today, several tools that allow you to do an online ALTER TABLE for MySQL:

  • Edit 2016: gh-ost, GitHub's trigger-free schema migration tool (disclaimer: I am the author of this tool)
  • oak-online-alter-table, part of the openark kit (disclaimer: I am the author of this tool)
  • pt-online-schema-change, part of the Percona Toolkit
  • Facebook's online schema change tool

Let's consider the "normal" ALTER TABLE first:

For a large table, ALTER TABLE takes a long time. innodb_buffer_pool_size is important, and so are other variables, but on a very large table they are all negligible: the operation simply takes time.
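As a quick sketch, this is how you would inspect and raise the buffer pool; the 8 GB value is purely illustrative, and setting it at runtime works only in MySQL 5.7 and later (older versions require a my.cnf change and a restart):

  -- Current buffer pool size, in bytes:
  SHOW VARIABLES LIKE 'innodb_buffer_pool_size';

  -- Raise it at runtime (MySQL 5.7+; 8 GB here is illustrative):
  SET GLOBAL innodb_buffer_pool_size = 8 * 1024 * 1024 * 1024;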

What MySQL does on ALTER TABLE is create a new table with the new definition, copy all rows into it, then switch the tables over. During this time the table is completely locked.
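You can observe the copy from a second connection (new_col is an illustrative column name):

  -- Session 1: a table-copy ALTER on a large table:
  ALTER TABLE tablename ADD COLUMN new_col INT;

  -- Session 2: the copy shows up as a long-running statement,
  -- typically in a state such as "copy to tmp table":
  SHOW PROCESSLIST;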

Consider your own suggestion above:

This will most probably perform worst of all the options. Why? Because you are using an InnoDB table, so INSERT INTO tablename_tmp SELECT * FROM tablename makes for one huge transaction, which creates even more load than the normal ALTER TABLE would.

On top of that, you would have to shut down your application at that time so that no writes (INSERT, DELETE, UPDATE) occur on your table. If they do, your whole transaction is useless.
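To illustrate the problem (fieldname is taken from the question's example):

  -- Session 1: the huge copy, running for hours:
  INSERT INTO tablename_tmp SELECT * FROM tablename;

  -- Session 2, meanwhile: this row reaches only the original table
  -- and silently disappears once the tables are renamed:
  INSERT INTO tablename (fieldname) VALUES ('some value');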

What the online tools do

The tools do not all work alike, but the basics are shared:

  • They create a "shadow" table with the altered schema
  • They create triggers on the original table to propagate ongoing changes to the shadow table
  • They slowly copy all the rows from your table to the shadow table, in chunks: say, 1,000 rows at a time
  • They do all of the above while you are still able to access and manipulate the original table
  • When they are satisfied, they swap the two tables with a RENAME (a simplified sketch follows)
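Here is a minimal SQL sketch of the trigger-based approach. The column names (id, data, new_col) are illustrative, and the real tools add chunk sizing, throttling, locking, and many edge-case protections that this sketch omits:

  -- Shadow table with the altered schema:
  CREATE TABLE tablename_shadow LIKE tablename;
  ALTER TABLE tablename_shadow ADD COLUMN new_col INT;

  -- Triggers propagate ongoing writes to the shadow table:
  CREATE TRIGGER tablename_ins AFTER INSERT ON tablename FOR EACH ROW
    REPLACE INTO tablename_shadow (id, data) VALUES (NEW.id, NEW.data);
  CREATE TRIGGER tablename_upd AFTER UPDATE ON tablename FOR EACH ROW
    REPLACE INTO tablename_shadow (id, data) VALUES (NEW.id, NEW.data);
  CREATE TRIGGER tablename_del AFTER DELETE ON tablename FOR EACH ROW
    DELETE FROM tablename_shadow WHERE id = OLD.id;

  -- Copy existing rows in small chunks rather than one huge transaction:
  INSERT IGNORE INTO tablename_shadow (id, data)
    SELECT id, data FROM tablename WHERE id BETWEEN 1 AND 1000;
  -- ...repeat for each successive id range...

  -- Finally, swap the tables atomically:
  RENAME TABLE tablename TO tablename_old, tablename_shadow TO tablename;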

The openark-kit tool has been in use for 3.5 years now. The Percona tool is a few months old, but possibly better tested than the former. Facebook's tool is said to work well for Facebook, but does not provide a general solution for the average user; I have not used it myself.
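For illustration, a typical pt-online-schema-change run looks roughly like this (the database name mydb and the added column are placeholders; see the Percona Toolkit documentation for the full option set):

  pt-online-schema-change \
    --alter "ADD COLUMN new_col INT" \
    D=mydb,t=tablename \
    --execute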

Edit 2016: gh-ost is a trigger-free solution, which significantly reduces the load on the master by decoupling the migration's write load from the normal load. It is auditable, controllable, and testable. We developed it internally at GitHub and released it as open source; today we run all our production migrations through gh-ost. See more in the project's documentation.
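For illustration, a gh-ost run looks roughly like the following; the host and database names are placeholders, and connection and credential flags are omitted (see the gh-ost documentation):

  gh-ost \
    --host=replica.example.com \
    --database=mydb \
    --table=tablename \
    --alter="ADD COLUMN new_col INT" \
    --execute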

Each tool has its own limitations; look closely at the documentation.

The conservative way

The conservative way is to use active-passive master-master replication: run the ALTER on the standby (passive) server, then switch roles and run the ALTER again on what used to be the active server and is now passive. This is also a good option, but it requires an additional server and a deeper knowledge of replication.
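In outline, assuming an already working master-master setup (new_col is illustrative; disabling sql_log_bin for the session keeps the ALTER from replicating to the active master):

  -- On the passive master, while writes continue on the active one:
  SET SESSION sql_log_bin = 0;  -- do not replicate this statement
  ALTER TABLE tablename ADD COLUMN new_col INT;
  SET SESSION sql_log_bin = 1;

  -- Wait for replication to catch up, switch application writes to this
  -- server, then repeat the same ALTER on the former active master.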

