MySQL Slow Query Log Export and Review in RDS

A friend needed to analyze slow queries, and for that he decided to use the pt-query-digest tool. However, he got stuck exporting slow queries from the RDS slow logs because of two problems: seemingly encrypted query text, and slow logs living in a mysql table rather than a file.

1. The queries appeared hex-encoded in the sql_text column and were unreadable when he executed SELECT against mysql.slow_log.

    start_time: 2023-05-30 06:52:46.808701
     insert_id: 0
last_insert_id: 0
      sql_text: 0x53454C454354202A2046524F4D206D795F7461626C65205748455245206964203D2031323331

2. On RDS, the default log_output is set to TABLE, so he couldn't download the slow log as a file. Note that this is unlike standard MySQL, where the default log_output is FILE. You can verify the setting as shown below.
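
A quick check from any MySQL client (the TABLE value below is what RDS sets by default, per the point above):

mysql> SHOW GLOBAL VARIABLES LIKE 'log_output';
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| log_output    | TABLE |
+---------------+-------+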

Unreadable slow queries in mysql.slow_log

Is the mysql table for the slow log encrypted? No!

mysql> SELECT TABLE_SCHEMA, TABLE_NAME, CREATE_OPTIONS FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_NAME = 'slow_log';
+--------------+------------+----------------+
| TABLE_SCHEMA | TABLE_NAME | CREATE_OPTIONS |
+--------------+------------+----------------+
| mysql        | slow_log   |                |
+--------------+------------+----------------+

The table definition for slow_log shows that sql_text is a mediumblob column storing the text of the slow queries.
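
The definition below can be retrieved with SHOW CREATE TABLE (the \G terminator just formats the output vertically):

mysql> SHOW CREATE TABLE mysql.slow_log\G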

CREATE TABLE `slow_log` (
  `start_time` timestamp(6) NOT NULL DEFAULT CURRENT_TIMESTAMP(6) ON UPDATE CURRENT_TIMESTAMP(6),
  `user_host` mediumtext NOT NULL,
  `query_time` time(6) NOT NULL,
  `lock_time` time(6) NOT NULL,
  `rows_sent` int(11) NOT NULL,
  `rows_examined` int(11) NOT NULL,
  `db` varchar(512) NOT NULL,
  `last_insert_id` int(11) NOT NULL,
  `insert_id` int(11) NOT NULL,
  `server_id` int(10) unsigned NOT NULL,
  `sql_text` mediumblob NOT NULL,
  `thread_id` bigint(21) unsigned NOT NULL
) ENGINE=CSV DEFAULT CHARSET=utf8 COMMENT='Slow log'

To overcome the unreadable slow query log, we can use the CONVERT function: since sql_text is a mediumblob, the client renders its contents as hex, and CONVERT casts the binary value back into a readable character set such as utf8.

Use CONVERT to fetch readable records from the slow_log table:

mysql> SELECT CONVERT(sql_text USING utf8) FROM mysql.slow_log LIMIT 1;
+----------------------------------------+
| CONVERT(sql_text USING utf8)           |
+----------------------------------------+
| SELECT * FROM my_table WHERE id = 1231 |
+----------------------------------------+
1 row in set (2.71 sec)
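
Note that on MySQL 8.0, utf8 is an alias for the deprecated utf8mb3 character set, so utf8mb4 is the safer target; a minimal variant:

mysql> SELECT CONVERT(sql_text USING utf8mb4) FROM mysql.slow_log LIMIT 1;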

Exporting MySQL slow logs to a regular file

To export the slow query log from the table (mysql.slow_log) to a regular log file, use the following command:

mysql -h RDS_ENDPOINT -D mysql -s -r -e \
"SELECT CONCAT(
    '# Time: ', DATE_FORMAT(start_time, '%y%m%d %H%i%s'), '\n',
    '# User@Host: ', user_host, '\n',
    '# Query_time: ', TIME_TO_SEC(query_time),
    ' Lock_time: ', TIME_TO_SEC(lock_time),
    ' Rows_sent: ', rows_sent,
    ' Rows_examined: ', rows_examined, '\n',
    CONVERT(sql_text USING utf8), ';')
 FROM mysql.slow_log LIMIT 10" > rds_slow_log.log

This command exports the first 10 slow log entries to rds_slow_log.log in the standard slow-log format; drop the LIMIT clause to export everything.
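
Once exported, the table-based log can be rotated so it doesn't keep growing. RDS provides a stored procedure for this, which moves the current contents of mysql.slow_log to a backup table and truncates the live one:

mysql> CALL mysql.rds_rotate_slow_log;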

Finally, you can run pt-query-digest against the log file and start your query analysis.
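
For example, a minimal invocation (assuming the Percona Toolkit is installed):

# Summarize the exported slow log into a report
pt-query-digest rds_slow_log.log > slow_query_report.txt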

MySQL log_output on AWS

It is recommended to set log_output to FILE instead of TABLE, because writing logs to a file is more efficient, reliable, and simpler than logging to a table.
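
On RDS you change log_output through a custom DB parameter group rather than SET GLOBAL. A sketch using the AWS CLI, assuming a custom parameter group named my-mysql-params (a placeholder) is already attached to the instance:

aws rds modify-db-parameter-group \
    --db-parameter-group-name my-mysql-params \
    --parameters "ParameterName=log_output,ParameterValue=FILE,ApplyMethod=immediate"

log_output is a dynamic parameter, so ApplyMethod=immediate takes effect without a reboot.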

AWS provides the tooling to download the file-based slow logs via the AWS CLI:

aws rds download-db-log-file-portion
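
For example (my-rds-instance is a placeholder; slowquery/mysql-slowquery.log is the usual name for the current slow log on RDS MySQL):

# List the log files available on the instance
aws rds describe-db-log-files --db-instance-identifier my-rds-instance

# Download the current slow log from the beginning
aws rds download-db-log-file-portion \
    --db-instance-identifier my-rds-instance \
    --log-file-name slowquery/mysql-slowquery.log \
    --starting-token 0 --output text > mysql-slowquery.log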

Here is my script on GitHub to download MySQL slow log files from AWS: get_rds_slow_log.

Conclusion

By following this quick post, you can convert RDS or Aurora slow logs stored in the mysql.slow_log table into readable log files, enabling you to analyze slow queries efficiently and optimize your database performance.
