Wednesday, December 19, 2018

Exchange Partition: An Archiving Strategy


As applications mature, tables tend to grow without bound, especially time-series ones. They reach tens of millions of rows and run up to a few TB in size, yet the application usually only needs to access recently saved data (say, within the last year or so). A seemingly trivial task like adding an index or changing a column's type becomes very painful on such a humongous table. I was charged with such a task recently, and that got me thinking about archiving strategies.

Archiving is a good practice for keeping the working set down to what the application actually needs, while still retaining data that might be required for legal or compliance reasons, or for infrequent workflows like looking at the 10-year purchase history of a customer.

Though archiving looks like a database problem, it cannot be done in a vacuum. It needs buy-in from the application, legal, and compliance teams to establish the archiving boundaries, the access patterns, and the availability requirements for the archived data.

But solving the database side of the problem is right up my alley, so I tested a few approaches. One of my favorites, for its ease of use, is the EXCHANGE PARTITION feature in MySQL 5.7.

In MySQL 5.7, it is possible to exchange a table partition or subpartition with a table using –
ALTER TABLE pt EXCHANGE PARTITION p WITH TABLE nt

where
pt is the partitioned table,
p is the partition of pt to be exchanged,
and nt is a non-partitioned table.

Executing the statement requires the ALTER, INSERT, CREATE, and DROP privileges on both tables.
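Granting these for the schema holding both tables might look like this (the schema and account names here are illustrative):

mysql> GRANT ALTER, INSERT, CREATE, DROP ON store.* TO 'archiver'@'localhost';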

This provides a great opportunity for archival of partitioned tables.

Consider an online retail store that logs its invoices or orders in a table. Here is an oversimplified invoice table for the store.

CREATE TABLE `invoice` (
  `id` int(10) unsigned NOT NULL AUTO_INCREMENT,
  `invoice_num` int(10) unsigned NOT NULL,
  `stockcode` int(10) unsigned NOT NULL,
  `invoice_date` datetime NOT NULL,
  `price` decimal(10,2) DEFAULT NULL,
  PRIMARY KEY (`id`,`invoice_date`)
) ENGINE=InnoDB
PARTITION BY RANGE (YEAR(invoice_date))
(PARTITION p2009 VALUES LESS THAN (2010) ENGINE = InnoDB,
 PARTITION p2010 VALUES LESS THAN (2011) ENGINE = InnoDB,
 PARTITION p2011 VALUES LESS THAN (2012) ENGINE = InnoDB,
 PARTITION p2012 VALUES LESS THAN (2013) ENGINE = InnoDB,
 PARTITION p2013 VALUES LESS THAN (2014) ENGINE = InnoDB,
 PARTITION p2014 VALUES LESS THAN (2015) ENGINE = InnoDB,
 PARTITION p2015 VALUES LESS THAN (2016) ENGINE = InnoDB,
 PARTITION p2016 VALUES LESS THAN (2017) ENGINE = InnoDB,
 PARTITION p2017 VALUES LESS THAN (2018) ENGINE = InnoDB,
 PARTITION p2018 VALUES LESS THAN (2019) ENGINE = InnoDB,
 PARTITION pMAX VALUES LESS THAN MAXVALUE ENGINE = InnoDB);

Queries on this table only need data from the last year or so, but for compliance and other infrequent workflows we need to retain older data. EXCHANGE PARTITION gives us a way to archive partitions with quick DDL operations.

Consider the need to archive 2010 data, which is in p2010 partition.

mysql> select count(*) from invoice PARTITION (p2010);
+----------+
| count(*) |
+----------+
|  1111215 |
+----------+
1 row in set (0.38 sec)

To use EXCHANGE PARTITION, the partitioned table pt and the non-partitioned table nt that the data will be archived into need to meet a few requirements -
-       nt has the same structure as pt.
-       nt is not a temporary table.
-       nt has no foreign keys, and no foreign keys refer to it.
-       No rows in nt lie beyond the boundaries of the partition p.


Let's create the table invoice_2010, which will archive all invoices from 2010.

mysql> create table invoice_2010 like invoice;
Query OK, 0 rows affected (0.44 sec)

mysql> alter table invoice_2010 remove partitioning;
Query OK, 0 rows affected (0.38 sec)
Records: 0  Duplicates: 0  Warnings: 0

mysql> show create table invoice_2010 \G
 Table: invoice_2010
Create Table: CREATE TABLE `invoice_2010` (
  `id` int(10) unsigned NOT NULL AUTO_INCREMENT,
  `invoice_num` int(10) unsigned NOT NULL,
  `stockcode` int(10) unsigned NOT NULL,
  `invoice_date` datetime NOT NULL,
  `price` decimal(10,2) DEFAULT NULL,
  PRIMARY KEY (`id`,`invoice_date`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8
1 row in set (0.00 sec)

Archiving the partition using EXCHANGE PARTITION can then be done as follows:

mysql> select count(*) from invoice PARTITION (p2010);
+----------+
| count(*) |
+----------+
|  1111215 |
+----------+
1 row in set (0.38 sec)

mysql> ALTER TABLE  invoice EXCHANGE PARTITION p2010 WITH TABLE invoice_2010;
Query OK, 0 rows affected (0.14 sec)

mysql> select count(*) from invoice PARTITION (p2010);
+----------+
| count(*) |
+----------+
|        0 |
+----------+
1 row in set (0.00 sec)

mysql> select count(*) from invoice_2010;
+----------+
| count(*) |
+----------+
|  1111215 |
+----------+
1 row in set (0.21 sec)

If needed, the partition can also be moved back:

mysql> ALTER TABLE  invoice EXCHANGE PARTITION p2010 WITH TABLE invoice_2010;
Query OK, 0 rows affected (0.73 sec)

mysql> select count(*) from invoice_2010;
+----------+
| count(*) |
+----------+
|        0 |
+----------+
1 row in set (0.00 sec)

mysql> select count(*) from invoice PARTITION (p2010);
+----------+
| count(*) |
+----------+
|  1111215 |
+----------+
1 row in set (0.23 sec)

If any rows in the non-partitioned table violate the partitioning rule of the partition they are being exchanged with, you get an error:

mysql> update invoice_2010 set invoice_date='2012-01-01' where id=10000009;
Query OK, 1 row affected (0.17 sec)
Rows matched: 1  Changed: 1  Warnings: 0

mysql> ALTER TABLE  invoice EXCHANGE PARTITION p2010 WITH TABLE invoice_2010;
ERROR 1737 (HY000): Found a row that does not match the partition


In the current implementation, validation is row by row: a full table scan on the non-partitioned table checks whether any row violates the partitioning rule. From the open worklog, there seem to be plans for the command to use an index instead of a full table scan, but that isn't implemented yet. A workaround is WITHOUT VALIDATION.

To avoid time-consuming validation when exchanging a partition with a table that has many rows, the row-by-row validation step can be skipped by appending WITHOUT VALIDATION to the ALTER TABLE ... EXCHANGE PARTITION statement. The onus then lies on the engineer to verify that no partitioning rules are violated; one way to do that is shown below.
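A quick range check on the non-partitioned table can provide that confidence before skipping validation. For p2010 the boundaries mirror the partition definition (a sketch, reusing the tables above):

mysql> SELECT COUNT(*) FROM invoice_2010
    ->  WHERE invoice_date < '2010-01-01' OR invoice_date >= '2011-01-01';

If this returns 0, the exchange cannot violate the partitioning rule. (In the demo above, the deliberately corrupted 2012 row would make it return 1, which is exactly what validation would have caught.)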

mysql> ALTER TABLE  invoice EXCHANGE PARTITION p2010 WITH TABLE invoice_2010 WITHOUT VALIDATION;
Query OK, 0 rows affected (0.13 sec)

From a 20,000 ft view, the high-level flow of the command involves
-       Taking an upgradable metadata lock on both tables
-       Verifying that the metadata matches (i.e. both tables have the same structure)
-       If WITHOUT VALIDATION is not used, validating the data in the non-partitioned table
-       Upgrading to an exclusive metadata lock on both tables
-       Renaming the non-partitioned table to the partition and the partition to the non-partitioned table
-       Releasing the metadata locks


It would have been nice if it were possible to append to the non-partitioned table rather than exchange with it, but my guess is that it then wouldn't be a pure metadata operation. The same effect can be achieved by exchanging into an intermediate table, followed by a copy into the archive table.

In our example, this would involve creating a table invoice_2010_intermediate, exchanging p2010 from the invoice table with invoice_2010_intermediate, and then copying from invoice_2010_intermediate into invoice_2010, as sketched below.
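A hedged sketch of that flow, reusing the tables from this example (the intermediate table must be empty before the exchange):

mysql> CREATE TABLE invoice_2010_intermediate LIKE invoice;
mysql> ALTER TABLE invoice_2010_intermediate REMOVE PARTITIONING;
mysql> ALTER TABLE invoice EXCHANGE PARTITION p2010 WITH TABLE invoice_2010_intermediate;
mysql> INSERT INTO invoice_2010 SELECT * FROM invoice_2010_intermediate;
mysql> DROP TABLE invoice_2010_intermediate;

The exchange itself stays a quick metadata operation; only the final copy into the archive table pays the row-by-row cost.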

But for what it does, I think it is a delightful approach to archiving data that you still need on your database server for reads, but that no longer changes.

Resources :
Script for populating data into invoice table - https://gist.github.com/dontstopbelieveing/3c42338c8f5c756a526ab2f7bef5525e

Friday, January 19, 2018

Crash Recovery : Scratching the surface : Part 3

In the last post we saw how the log is written during the operation of a database system. In this post we will go over the actual recovery mechanism and how the log is used.

Why recovery?

As we discussed in the first post on this topic, a recovery algorithm ensures that changes made by uncommitted transactions are rolled back, and changes made by committed transactions persist, even after a crash, restart, or error.


What exactly happens in recovery?

Two processes that are essential for recovery happen in the database system on an ongoing basis: checkpointing and write-ahead logging.


What is a checkpoint?

A checkpoint is a snapshot of the DBMS state; by taking checkpoints periodically, the work done during recovery can be reduced. A begin_checkpoint record is written to mark the start of the checkpoint. An end_checkpoint record, containing the transaction table and the dirty page table, is then appended to the log. After the checkpoint completes, a special master record containing the LSN of the begin_checkpoint log record is written. When the system comes back from a crash, the restart process begins by locating the most recent checkpoint via this master record.
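A minimal sketch of that sequence in Python-style pseudocode (the log and master-record interfaces are assumptions for illustration, not a real DBMS API):

def take_checkpoint(log, master_record, txn_table, dirty_page_table):
    # mark the start of the checkpoint
    begin_lsn = log.append({"type": "begin_checkpoint"})
    # snapshot the transaction table and the dirty page table into the log
    log.append({"type": "end_checkpoint",
                "txn_table": dict(txn_table),
                "dirty_page_table": dict(dirty_page_table)})
    log.flush()
    # the master record tells restart where the most recent checkpoint begins
    master_record.write(begin_lsn)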

What is write ahead logging?

We saw this in the last post: any change to the database is first recorded in the log, and the log record must be written to stable storage (disk) before the changed data page is.
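Stated as code, the rule is a single guard before any page write (the same pseudocode style; flush_up_to and page_lsn are assumed names):

def flush_page(page, log, disk):
    # WAL: every log record describing this page must reach stable storage first
    log.flush_up_to(page.page_lsn)
    disk.write(page)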

Recovery proceeds in three phases - Analysis, Redo and Undo.

During Analysis, the system determines which transactions were committed and which were not, collecting information about all transactions that were active (neither committed nor rolled back) at the time of the crash. Redo then re-does the logged actions and brings the system back to the state it was in at the time of the crash. This is followed by the Undo phase.
We will look at what happens in each phase in detail next.
Let's continue with the example from our last post:
LSN | prevLSN | transID | type   | PageID | Length | Offset | Before | After
----+---------+---------+--------+--------+--------+--------+--------+------
1   | -       | T1      | update | P500   | 3      | 21     | ABC    | DEF
2   | -       | T2      | update | P600   | 3      |        | HIJ    | KLM
3   | 2       | T2      | update | P500   | 3      | 20     | GDE    | QRS
4   | 1       | T1      | update | P505   | 3      |        | TUV    | WXY
5   | 3       | T2      | commit | -      | -      | -      | -      | -

Now let us assume that T1 changes NOP to ABC on page P700, writing another log record:
LSN | prevLSN | transID | type   | PageID | Length | Offset | Before | After
----+---------+---------+--------+--------+--------+--------+--------+------
1   | -       | T1      | update | P500   | 3      | 21     | ABC    | DEF
2   | -       | T2      | update | P600   | 3      |        | HIJ    | KLM
3   | 2       | T2      | update | P500   | 3      | 20     | GDE    | QRS
4   | 1       | T1      | update | P505   | 3      |        | TUV    | WXY
5   | 3       | T2      | commit | -      | -      | -      | -      | -
6   | 4       | T1      | update | P700   | 3      |        | NOP    | ABC
Let us look at a scenario in which the system crashes before this last log record (LSN 6) is written to stable storage.

What exactly happens in the Analysis phase?

The analysis phase begins at the most recent checkpoint, initializing the dirty page table and the transaction table to the copies of those structures stored in the end_checkpoint record. Analysis then scans the log forward until it reaches the end of the log.
  • If a log record other than an end record for a transaction T is encountered, an entry for T is added to the transaction table if it is not already there, and T's lastLSN field is set to this record's LSN.
  • If an end log record for a transaction T is encountered, T is removed from the transaction table because it is no longer active.
  • If the log record is a commit record, the transaction's status is set to C (committed); otherwise it is set to U, indicating it needs to be undone.
  • If a redo log record affecting a page P is encountered and P is not in the dirty page table, an entry for P is added with recLSN set to this record's LSN.
At the end of analysis, the transaction table contains all transactions that were active at the time of the crash. A minimal sketch of this pass follows.
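In the same Python-style pseudocode as before (the record layout mirrors the log table above; everything else is an assumed interface):

def analysis(log, checkpoint):
    # start from the structures saved in the end_checkpoint record
    txn_table = dict(checkpoint.txn_table)                 # transID -> {lastLSN, status}
    dirty_page_table = dict(checkpoint.dirty_page_table)   # pageID -> recLSN

    for rec in log.scan_forward(checkpoint.begin_lsn):
        if rec.type == "end":
            txn_table.pop(rec.transID, None)               # no longer active
            continue
        entry = txn_table.setdefault(rec.transID, {"lastLSN": None, "status": "U"})
        entry["lastLSN"] = rec.LSN
        if rec.type == "commit":
            entry["status"] = "C"
        elif rec.type in ("update", "CLR") and rec.pageID not in dirty_page_table:
            dirty_page_table[rec.pageID] = rec.LSN         # first record to dirty this page
    return txn_table, dirty_page_table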
In our scenario, let us assume the checkpoint was taken at the very beginning, when both the transaction table and the dirty page table were empty, so analysis starts from the first log record.

Looking at LSN 1

1 | - | T1 | update | P500 | 3 | 21 | ABC | DEF

Transaction Table
transID | lastLSN
1       | 1
(lastLSN - LSN of the most recent log record belonging to the transaction)

Dirty Page Table
pageID | recLSN
P500   | 1
(recLSN - LSN of the first log record that caused the page to become dirty)


Looking at LSN 2

2 | - | T2 | update | P600 | 3 |  | HIJ | KLM

Transaction Table
transID | lastLSN
1       | 1
2       | 2

Dirty Page Table
pageID | recLSN
P500   | 1
P600   | 2

Looking at LSN 3

3 | 2 | T2 | update | P500 | 3 | 20 | GDE | QRS

Transaction Table
transID | lastLSN
1       | 1
2       | 3

Dirty Page Table
pageID | recLSN
P500   | 1
P600   | 2
Looking at LSN 4

4 | 1 | T1 | update | P505 | 3 |  | TUV | WXY

Transaction Table
transID | lastLSN
1       | 4
2       | 3

Dirty Page Table
pageID | recLSN
P500   | 1
P600   | 2
P505   | 4

(P505 enters the dirty page table here, since this is the first record that dirties it.)


Looking at LSN 5

5 | 3 | T2 | commit | - | - | - | - | -

T2's status is set to committed; since committed transactions need no undo, this simplified walkthrough drops T2 from the table.

Transaction Table
transID | lastLSN
1       | 4

Dirty Page Table
pageID | recLSN
P500   | 1
P600   | 2
P505   | 4

Since the system crashed before log record 6 could be written to stable storage, that record is never seen during the Analysis phase. The above is the state of the transaction table and the dirty page table at the end of analysis.



What exactly happens in the Redo phase?

During the redo phase, the system reapplies the updates of all transactions, committed or otherwise. If a transaction was aborted before the crash and its updates were undone, as indicated by compensation log records (CLRs), the actions described in the CLRs are also reapplied.
The Redo phase starts with the smallest recLSN in the dirty page table constructed in the Analysis phase. This LSN refers to the oldest update that may not have been written to disk before the crash. Starting from this LSN, redo scans forward until it reaches the end of the log.
For each log record (update or CLR), redo checks the dirty page table:
  • If the page is not in the dirty page table, the record is ignored, since all changes to the page have already been written to disk.
  • If the page is in the dirty page table but its recLSN (the LSN that first dirtied the page) is greater than the LSN of the record being checked, the record is ignored; the change was written to disk and a later LSN re-dirtied the page.
It then retrieves the page and checks the most recent LSN on the page (the pageLSN); if this is greater than or equal to the LSN of the record being checked, the record is ignored, since the page already contains that change.
In all other cases the logged action is redone, whether it is an update record or a CLR (a record written during rollback/abort): the action is reapplied and the pageLSN on the page is set to the LSN of the redone record. No additional log record is written. A sketch of this pass follows.
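Again in Python-style pseudocode (the buffer-pool interface is an assumption):

def redo(log, dirty_page_table, buffer_pool):
    if not dirty_page_table:
        return                                  # nothing could be missing from disk
    start = min(dirty_page_table.values())      # oldest change possibly not on disk
    for rec in log.scan_forward(start):
        if rec.type not in ("update", "CLR"):
            continue
        if rec.pageID not in dirty_page_table:
            continue                            # change already made it to disk
        if dirty_page_table[rec.pageID] > rec.LSN:
            continue                            # page was re-dirtied by a later record
        page = buffer_pool.fetch(rec.pageID)
        if page.page_lsn >= rec.LSN:
            continue                            # page already contains this change
        page.apply(rec.offset, rec.length, rec.after)  # reapply the logged action
        page.page_lsn = rec.LSN                 # no new log record is written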
Considering the transaction table and the dirty page table at the end of the analysis phase in our example:

Transaction Table
transID | lastLSN
1       | 4

Dirty Page Table
pageID | recLSN
P500   | 1
P600   | 2
P505   | 4

The redo phase starts with the smallest recLSN in the dirty page table, which is 1, and scans forward through the log.

Looking at LSN 1

1 | - | T1 | update | P500 | 3 | 21 | ABC | DEF

P500 is in the dirty page table, and its recLSN (1) is equal to the LSN being checked, so the system retrieves the page. The pageLSN is less than 1, so the action must be redone: ABC is changed to DEF and the pageLSN is set to 1.
Looking at LSN 2

2 | - | T2 | update | P600 | 3 |  | HIJ | KLM

P600 is in the dirty page table and its recLSN (2) is equal to the LSN being checked, so the system retrieves the page. The pageLSN (2) is greater than or equal to the LSN being checked, so the update does not need to be redone; the change already made it to disk. (Remember, T2 was committed, and this page happened to be flushed before the crash.)

Looking at LSN 3

3 | 2 | T2 | update | P500 | 3 | 20 | GDE | QRS

P500 is in the dirty page table and its recLSN (1) is less than the LSN being checked, so the page is retrieved. Its pageLSN is now 1 (set when LSN 1 was redone), which is less than 3, so this update is also redone: GDE is changed to QRS and the pageLSN is set to 3. Even though T2 committed, its change to P500 had not reached disk, since the on-disk version of that page predated LSN 1.

Looking at LSN 4

4 | 1 | T1 | update | P505 | 3 |  | TUV | WXY

P505 is in the dirty page table, with a recLSN equal to the LSN being checked. The pageLSN is less than 4, so the update must be redone: TUV is changed to WXY.

Looking at LSN 5

5 | 3 | T2 | commit | - | - | - | - | -

Since this is neither an update nor a CLR record, no action needs to be taken. At the end of the redo phase, an end record is written for T2.


What exactly happens in the Undo phase?

The aim of the undo phase is to undo the actions of all transactions that were active at the time of the crash, effectively aborting them. The Analysis phase identified these transactions, along with their most recent LSN (lastLSN). They must be undone in the reverse of the order in which they appear in the log, so undo starts from the largest, i.e. most recent, LSN among the transactions to be undone.
For each log record

  • If the record is a CLR and its undoNextLSN value is not null, the undoNextLSN is added to the set of log records to undo. If the undoNextLSN is null, an end record is written for the transaction, because it has been completely undone, and the CLR is discarded.
  • If the record is an update, a CLR is written and the corresponding action is undone, just as if the system were doing a rollback, and the prevLSN of the update record is added to the set of records to be undone.
When the set of records to be undone is empty, the undo phase is complete. A sketch of this loop follows.
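In the same Python-style pseudocode (the log interface and fetch_page helper are assumptions):

def undo(log, txn_table):
    # lastLSN of every transaction still active at the time of the crash
    to_undo = {entry["lastLSN"] for entry in txn_table.values()}
    while to_undo:
        lsn = max(to_undo)                     # always undo the most recent record first
        to_undo.discard(lsn)
        rec = log.read(lsn)
        if rec.type == "CLR":
            if rec.undoNextLSN is not None:
                to_undo.add(rec.undoNextLSN)
            else:
                log.append_end(rec.transID)    # transaction completely undone
        elif rec.type == "update":
            # the CLR's undoNextLSN points at this record's prevLSN
            log.append_clr(rec, undo_next_lsn=rec.prevLSN)
            page = fetch_page(rec.pageID)
            page.apply(rec.offset, rec.length, rec.before)  # restore the old value
            if rec.prevLSN is not None:
                to_undo.add(rec.prevLSN)
            else:
                log.append_end(rec.transID)    # first record of the transaction undone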

Once the undo phase is complete, the system is said to be “recovered” and can proceed with normal operations.

In our example, the transaction table from the Analysis phase is:

Transaction Table
transID | lastLSN
1       | 4

The undo phase starts from the log record with LSN 4 and maintains a set of LSNs to undo.

Looking at LSN 4

4 | 1 | T1 | update | P505 | 3 |  | TUV | WXY

WXY is changed back to TUV.
Set to undo: {1}, which is the prevLSN.

A CLR is added:

LSN | prevLSN | transID | type | PageID | Length | Offset | Before | After | undoNextLSN
6   | 4       | T1      | CLR  | P505   | 3      |        | WXY    | TUV   | 1

where 1 is the undoNextLSN.


Looking at LSN 1

1 | - | T1 | update | P500 | 3 | 21 | ABC | DEF

DEF is changed back to ABC.
Set to undo: {}, since the prevLSN is null.

A CLR is added:

LSN | prevLSN | transID | type | PageID | Length | Offset | Before | After | undoNextLSN
7   | 6       | T1      | CLR  | P500   | 3      |        | DEF    | ABC   | -


Since the set of actions to be undone is empty, the undo phase is complete. T1 is removed from the transaction table, a checkpoint is taken, and the system is in a recovered state, ready to proceed with normal operation.


What happens if a system crashes during crash recovery?

If there is a crash during crash recovery, the system can still recover: for every update that was undone, a CLR was written, so the system simply redoes the CLRs rather than repeating the undo work. This is why CLRs are such an important part of recovery.


In our example, if the system crashed after the change from WXY to TUV was made but before the change from DEF to ABC, then when the system recovers again it would see the CLR for the WXY-to-TUV change in the redo phase and repeat that change. The change from DEF to ABC would be done as part of undo during this second recovery.


In the next and hopefully last post on recovery, I’ll try to look at MySQL logs and source code related to the recovery component and see some of these things in action.