(SDD403) Amazon RDS for MySQL Deep Dive | AWS re:Invent 2014
Transcript of (SDD403) Amazon RDS for MySQL Deep Dive | AWS re:Invent 2014
November 14, 2014 | Las Vegas, NV
Pavan Pothukuchi, Principal Product Manager, RDS
Sajee Mathew, Solutions Architect, AWS
[Diagram: an application writes to its on-premises DB master, which is backed up to SQL flat files; the files are copied via scp to a staging server in the AWS Region and loaded into the DB slave, and replication then keeps the slave current]
mysql> GRANT SELECT, REPLICATION SLAVE, REPLICATION CLIENT ON *.* TO repluser@'<RDS Endpoint>' IDENTIFIED BY '<password>';
Create replication user on the master
Record the “File” and the “Position” in the backup
$ mysqldump --databases sampledb --master-data=2 --single-transaction -r sampledbdump.sql -u mysqluser -pmysqluserpassword
-- Position to start replication or point-in-time recovery from
-- CHANGE MASTER TO MASTER_LOG_FILE='mysql-bin-changelog.000031', MASTER_LOG_POS=107;
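The dump is then copied to a staging host and loaded into the target; a minimal sketch, with hostnames and paths as placeholders:
$ scp sampledbdump.sql ec2-user@<staging server>:/tmp/
$ mysql -h <RDS Endpoint> -u mysqluser -p < /tmp/sampledbdump.sql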
Configure replication target
mysql> call mysql.rds_set_external_master('<master server>', 3306, '<replication user>', '<password>', 'mysql-bin-changelog.000031', 107, 0);
mysql> call mysql.rds_start_replication;
Configure the replication target and start replication
Stop the app pointing at the source. Stop replication after target catches up
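One way to confirm the target has caught up before stopping is to check slave status on it (Seconds_Behind_Master should be 0 and both replication threads running):
mysql> SHOW SLAVE STATUS\G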
mysql> call mysql.rds_stop_replication;
Promote target Amazon RDS database instance
mysql> call mysql.rds_reset_external_master;
Point the app at the target Amazon RDS database instance
[Diagram: dump data from the source master, scp the dump to a staging server in the AWS Region, and load it into the target; master-slave replication then keeps the target in sync until cutover]
mysql> call mysql.rds_set_configuration('binlog retention hours', 48);
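Raising binary log retention on the source keeps enough binlog history for the target to catch up; the current value can be verified with the companion procedure:
mysql> call mysql.rds_show_configuration;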
[Diagram: Multi-AZ deployment with physical, synchronous replication between the primary in AZ1 and the standby in AZ2; failover is a DNS CNAME update. Read replicas use asynchronous replication.]
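An existing single-AZ instance can be converted to Multi-AZ with a modify call; a sketch with a placeholder identifier:
$ aws rds modify-db-instance --db-instance-identifier <mydb> --multi-az --apply-immediately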
[ERROR] Slave I/O: Got fatal error 1236 from master when reading data from binary log: 'Client requested master to start replication from impossible position'
Source: https://blogs.oracle.com/MySQL/entry/mysql_5_6_replication_performance
[Diagram: read scaling for a 90% read / 10% write workload; reads are spread across the Primary and Replica1-Replica4 in AZ1 and AZ2, but every instance must also apply the 10% write stream, giving roughly 1X-2X-3X scale as replicas are added]
[Diagram: the same topology with an 80% read / 20% write workload; the Primary and Replica1-Replica4 must each replay the 20% write stream, so attainable scale is closer to 1X-2X]
[Chart: Scale based on % Write. Achievable scale versus number of replicas (1 to 32) for write percentages from 10% to 50%; at low write ratios the gain tops out around 4x]
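Read replicas are created from an existing instance; a minimal AWS CLI sketch with placeholder identifiers:
$ aws rds create-db-instance-read-replica --db-instance-identifier <myreplica> --source-db-instance-identifier <mydb>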
Load buffer pool
Dump buffer pool
[Chart: transactions per second over time (sec) for a workload with a 50/50 R/W ratio; with an unwarmed cache throughput takes about 9 minutes to ramp up, while a warmed cache delivers roughly 4X that initial throughput from the start]
mysql> CREATE EVENT evt_dump_innodb_cache
       ON SCHEDULE EVERY 1 HOUR STARTS '2014-11-06 01:00:00'
       DO CALL mysql.rds_innodb_buffer_pool_dump_now();
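After a restart or failover, the saved dump can be loaded back to warm the buffer pool before taking traffic; a quick sketch using the matching RDS procedure:
mysql> call mysql.rds_innodb_buffer_pool_load_now();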
[Diagram: the schema change is applied on a read-only replica (R2); the replica is then promoted to master and takes read/write traffic]
mysql> ALTER TABLE customers_address ADD COLUMN province VARCHAR(100);
pt-online-schema-change --alter "ADD COLUMN province VARCHAR(100)" --execute h=localhost,D=bench,t=customers_address,u=admin,p=admin
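pt-online-schema-change avoids blocking writes by building a shadow copy of the table and swapping it in. A rough sketch of the pattern it automates (the _new/_old table names mirror the tool's convention; chunked row copying and the sync triggers are elided):
mysql> CREATE TABLE _customers_address_new LIKE customers_address;
mysql> ALTER TABLE _customers_address_new ADD COLUMN province VARCHAR(100);
-- triggers on customers_address replay writes into the new table while rows are copied in chunks
mysql> RENAME TABLE customers_address TO _customers_address_old, _customers_address_new TO customers_address;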
[Charts: transactions per second (TPS) over 24 hours for a 100% read workload on 20 GB of data, compared across instance and storage configurations at their hourly prices:]
db.m1.medium + 200 GB standard: $0.575 per hour
db.m3.medium + 200 GB + 2000 IOPS: $0.408 per hour
db.m3.large + 200 GB + 2000 IOPS: $0.508 per hour
db.t2.medium + 200 GB gp2: $0.105 per hour
db.t2.medium + 1 TB gp2: $0.233 per hour
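Switching between these configurations is a modify operation on the instance; a minimal AWS CLI sketch with placeholder values:
$ aws rds modify-db-instance --db-instance-identifier <mydb> --db-instance-class db.t2.medium --allocated-storage 1000 --storage-type gp2 --apply-immediately
(Instance class changes involve a brief outage, so they are typically scheduled in a maintenance window.)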
http://bit.ly/awsevals