{"id":2914,"date":"2023-05-30T09:05:57","date_gmt":"2023-05-30T09:05:57","guid":{"rendered":"http:\/\/kedar.nitty-witty.com\/?p=2914"},"modified":"2025-09-21T11:58:58","modified_gmt":"2025-09-21T11:58:58","slug":"mysql-slow-query-log-export-and-review-in-rds","status":"publish","type":"post","link":"https:\/\/kedar.nitty-witty.com\/blog\/mysql-slow-query-log-export-and-review-in-rds","title":{"rendered":"MySQL Slow query log export and download in RDS"},"content":{"rendered":"\n<p>A friend needed to analyze slow queries and for that he decides to use pt-query-digest tool. Though he got stuck at exporting slow queries from slow logs in RDS for two problems: encrypted data and slow logs in mysql table.<\/p>\n\n\n\n<p>1. The queries are hexed in sql_field and unreadable when he executed SELECT from mysql.slow_log.<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>    start_time: 2023-05-30 06:52:46.808701\n     insert_id: 0\nlast_insert_id: 0\n      <strong>sql_text:<\/strong> <strong>0x53454C454354202A2046524F4D206D795F7461626C65205748455245206964203D2031323331<\/strong><\/code><\/pre>\n\n\n\n<p>2. The default log_output is set to &#8220;TABLE&#8221; and hence he couldn&#8217;t download the slow log. Note that this is unlike standard MySQL where default log_output is FILE.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">Unreadable slow query in mysql slow_log<\/h3>\n\n\n\n<p>Is that mysql table for slow log encrypted? 
No!<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\">mysql&gt; SELECT TABLE_SCHEMA, TABLE_NAME, CREATE_OPTIONS FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_NAME = 'slow_log';\n+--------------+------------+----------------+\n| TABLE_SCHEMA | TABLE_NAME | CREATE_OPTIONS |\n+--------------+------------+----------------+\n| mysql        | slow_log   |                |\n+--------------+------------+----------------+<\/pre>\n\n\n\n<p>The table definition for slow_log shows that sql_text is a mediumblob column storing the text of the slow queries.<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\">CREATE TABLE `slow_log` (\n`start_time` timestamp(6) NOT NULL DEFAULT CURRENT_TIMESTAMP(6) ON UPDATE CURRENT_TIMESTAMP(6),\n`user_host` mediumtext NOT NULL,\n`query_time` time(6) NOT NULL,\n`lock_time` time(6) NOT NULL,\n`rows_sent` int(11) NOT NULL,\n`rows_examined` int(11) NOT NULL,\n`db` varchar(512) NOT NULL,\n`last_insert_id` int(11) NOT NULL,\n`insert_id` int(11) NOT NULL,\n`server_id` int(10) unsigned NOT NULL,\n<strong>`sql_text` mediumblob NOT NULL,<\/strong>\n`thread_id` bigint(21) unsigned NOT NULL\n) ENGINE=CSV DEFAULT CHARSET=utf8 COMMENT='Slow log'<\/pre>\n\n\n\n<p>So the slow query text is not encrypted; it is simply stored as binary data. To overcome the unreadable slow query log, we can use the CONVERT function to convert the binary sql_text column to a readable character set such as utf8.<\/p>\n\n\n\n<p>Use the CONVERT function to fetch readable records from the slow_log table:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>mysql&gt; SELECT <strong>CONVERT(sql_text USING utf8)<\/strong> FROM mysql.slow_log LIMIT 1;\n+----------------------------------------+\n| CONVERT(sql_text USING utf8)           |\n+----------------------------------------+\n| SELECT * FROM my_table WHERE id = 1231 |\n+----------------------------------------+\n1 row in set (2.71 sec)<\/code><\/pre>\n\n\n\n<h2 class=\"wp-block-heading\">Exporting MySQL slow logs to a regular file<\/h2>\n\n\n\n<p>To export the slow query 
log from the table (mysql.slow_log) to a regular log file, use the following command:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>mysql -h RDS_ENDPOINT -D mysql -s -r -e \"SELECT CONCAT('# Time: ', DATE_FORMAT(start_time, '%y%m%d %H%i%s'), '\\n', '# User@Host: ', user_host, '\\n', '# Query_time: ', TIME_TO_SEC(query_time), ' Lock_time: ', TIME_TO_SEC(lock_time), ' Rows_sent: ', rows_sent, ' Rows_examined: ', rows_examined, '\\n', <strong>CONVERT(sql_text USING utf8)<\/strong>, ';') FROM mysql.slow_log LIMIT 10\" &gt; rds_slow_log.log<\/code><\/pre>\n\n\n\n<p>This command exports the first 10 slow log entries, formatted like a standard slow query log, to the rds_slow_log.log file. Drop the LIMIT 10 clause to export all entries.<\/p>\n\n\n\n<p>Finally, run pt-query-digest against the log file and you are ready to start your query analysis.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">MySQL log_output on AWS<\/h2>\n\n\n\n<p>It is recommended to set log_output to FILE instead of TABLE, because writing logs to a file is more efficient, reliable and simpler than logging to a table.<\/p>\n\n\n\n<p>AWS provides a way to download the slow logs thus generated using the AWS CLI, for example as sketched below (the instance identifier is a placeholder, and slowquery\/mysql-slowquery.log is the usual RDS MySQL slow log file name):<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>aws rds download-db-log-file-portion --db-instance-identifier your-instance-id --log-file-name slowquery\/mysql-slowquery.log --output text &gt; rds_slow_log.log<\/code><\/pre>\n\n\n\n<p>Here is my script on github to download MySQL slow log files from AWS: <a href=\"https:\/\/github.com\/kedarvj\/AWS-scripts\/blob\/master\/get_rds_slow_log\" target=\"_blank\" rel=\"noopener nofollow\" title=\"get_rds_slow_log\">get_rds_slow_log<\/a>.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Conclusion<\/h2>\n\n\n\n<p>By following this quick blog, you can convert RDS or Aurora slow logs that are stored in a table to readable log files. This process lets you efficiently analyze slow queries and optimize your database performance.<\/p>\n","protected":false},"excerpt":{"rendered":"A friend needed to analyze slow queries and decided to use the pt-query-digest tool. 
Though he got stuck at exporting slow queries from slow logs in RDS for&hellip;\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"om_disable_all_campaigns":false,"_monsterinsights_skip_tracking":false,"_monsterinsights_sitenote_active":false,"_monsterinsights_sitenote_note":"","_monsterinsights_sitenote_category":0,"footnotes":""},"categories":[868,8,377],"tags":[601,1139,605,427,604,599,603],"class_list":{"0":"post-2914","1":"post","2":"type-post","3":"status-publish","4":"format-standard","6":"category-aws-rds","7":"category-mysql","8":"category-mysql-articles","9":"tag-decrypt-slow-log","10":"tag-download-slow-log-rds","11":"tag-log_output","12":"tag-mysql","13":"tag-mysql-slow-log","14":"tag-slow-query-log","15":"tag-slow-query-review"},"aioseo_notices":[],"amp_enabled":true,"_links":{"self":[{"href":"https:\/\/kedar.nitty-witty.com\/blog\/wp-json\/wp\/v2\/posts\/2914","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/kedar.nitty-witty.com\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/kedar.nitty-witty.com\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/kedar.nitty-witty.com\/blog\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/kedar.nitty-witty.com\/blog\/wp-json\/wp\/v2\/comments?post=2914"}],"version-history":[{"count":6,"href":"https:\/\/kedar.nitty-witty.com\/blog\/wp-json\/wp\/v2\/posts\/2914\/revisions"}],"predecessor-version":[{"id":3524,"href":"https:\/\/kedar.nitty-witty.com\/blog\/wp-json\/wp\/v2\/posts\/2914\/revisions\/3524"}],"wp:attachment":[{"href":"https:\/\/kedar.nitty-witty.com\/blog\/wp-json\/wp\/v2\/media?parent=2914"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/kedar.nitty-witty.com\/blog\/wp-json\/wp\/v2\/categories?post=2914"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/kedar.nitty-witty.com\/b
log\/wp-json\/wp\/v2\/tags?post=2914"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}