Sequences – a quick guide and an unexpected quirk
05/03/2015
Sequences on Oracle databases are simple, but there are some basic truths about sequences which need to be understood if you are going to use them, especially on RAC:
- Sequences will have “holes” in them, even if you specify NOCACHE (e.g. if you rollback your transaction).
- Don’t specify NOCACHE. It doesn’t scale.
- Sequences will NOT necessarily be, erm, sequential for your transactions. That depends upon the COMMIT order and on whether you are using RAC.
- The higher the CACHE value, the bigger the holes, but the more scalable the sequence will become. You will lose the entire set of unused cached values across an instance restart.
- Each RAC node caches its own range of CACHE values, so for a cache of 100,000, NODE 1 gets 1-100,000, NODE 2 gets 100,001-200,000 to play with, and so on.
- If you are using RAC, and you have a heavy workload on one node and a low number of transactions on another node, there may be a significant disparity between the NEXTVAL on each node. I have seen this disparity manifest in the tens of millions.
- There is a global cache lock which must be passed from node to node when getting a new cache of sequence numbers on that node. It is lightweight, but it is a serialisation point. If your CACHE value is too low for your throughput, you will get contention on this lock.
- You hit the same serialisation point if you specify ORDER, so that the sequence advances in step across multiple RAC nodes (rather than each node having its own cache). Don't do this either.
- The point of sequences is to give you a unique value for a (surrogate) Primary Key. You can rely upon sequences to be unique. Well. Mostly. As long as they do not CYCLE.
- If you use sequences for a PK, you will probably end up with a right-hand index with hot-block contention during inserts, causing you to implement something suboptimal, like hash partitioning or reverse-key indexes to get around your design flaw.
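The first and fourth bullets are easy to demonstrate for yourself. A sketch, where the table and sequence names are illustrative and the dictionary query assumes the standard USER_SEQUENCES view:

```sql
-- Gaps appear even with NOCACHE when a transaction rolls back
create sequence seq_demo start with 1 increment by 1 nocache;

insert into t_demo (id) values (seq_demo.nextval);  -- consumes value 1
rollback;                                           -- the insert is undone...
insert into t_demo (id) values (seq_demo.nextval);  -- ...but NEXTVAL is now 2: a gap
commit;

-- Check how your sequences are configured
select sequence_name, cache_size, cycle_flag, order_flag, last_number
from   user_sequences;
```

The gap is permanent: a sequence value, once fetched, is never handed back, regardless of what the transaction does afterwards.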
Sequences on RAC can behave in unexpected ways. The example below is a little contrived, but it shows a potentially unexpected behaviour caused by CYCLEing your sequences:
| NODE 1 | NODE 2 |
|--------|--------|
| create sequence seq_1 start with 1 increment by 1 maxvalue 10 cache 2 cycle; Sequence created. | |
| select seq_1.nextval from dual; NEXTVAL: 1 | |
| | select seq_1.nextval from dual; NEXTVAL: 3 <- gets the next cache range, above node 1's |
| / NEXTVAL: 2 | |
| / NEXTVAL: 5 (skipping 3 and 4, cached on node 2) | |
| / NEXTVAL: 6 | |
| / NEXTVAL: 7 | |
| / NEXTVAL: 8 | |
| / NEXTVAL: 9 | |
| / NEXTVAL: 10 | |
| / NEXTVAL: 1 (the sequence has CYCLEd) | |
| / NEXTVAL: 2 | |
| / NEXTVAL: 3 <- erm. Hang on. Node 2 has this! | |
| / NEXTVAL: 4 <- and this!!! | |
| / NEXTVAL: 5 | |
| / NEXTVAL: 6 | |
| / NEXTVAL: 7 | |
| / NEXTVAL: 8 | |
| / NEXTVAL: 9 | |
| / NEXTVAL: 10 | |
| / NEXTVAL: 1 | |
| | / NEXTVAL: 4 <- reading from the local cache |
| | / NEXTVAL: 3 <- and gets a new cache range |
So: sequences can't guarantee ordering, can't guarantee no gaps, and, as this demo shows, can't even guarantee uniqueness, and they cause hot-block right-hand index problems. Tell me again why you are using them?
Because they are fast and you don’t CYCLE them.
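In practice, that means creating them with a generous cache and none of the dangerous options. A sketch, with an illustrative name and a cache value you would tune to your own insert rate:

```sql
create sequence order_id_seq
  start with 1
  increment by 1
  cache 1000   -- bigger cache = bigger holes, but less contention on the cache lock
  nocycle      -- never reissue values, so uniqueness actually holds
  noorder;     -- let each RAC node work from its own cached range
```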
However, sequences are becoming the new evil in the database, right behind triggers.