Server environment
$ cat /etc/issue
Fake Data
Since no transactions are needed, the table uses the MyISAM engine.
Create the MySQL table
CREATE TABLE `users` (
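The rest of the definition is truncated above. A minimal sketch consistent with the columns queried later (id, name, age, sex, income, country; the exact types and lengths here are assumptions) could be:

CREATE TABLE `users` (
  `id` INT UNSIGNED NOT NULL AUTO_INCREMENT,
  `name` VARCHAR(50) NOT NULL,
  `age` TINYINT UNSIGNED NOT NULL,
  `sex` TINYINT NOT NULL,
  `income` INT NOT NULL,
  `country` VARCHAR(50) NOT NULL,
  PRIMARY KEY (`id`)
) ENGINE=MyISAM DEFAULT CHARSET=utf8;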
Using a PHP script to insert data
$ mkdir fakeData; cd fakeData
$ vim main.php
<?php
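The body of main.php is truncated above. Whatever generator it uses, the key to loading roughly ten million rows quickly is batching many rows per statement, so the script ends up issuing INSERTs of this shape (the values are illustrative only):

INSERT INTO users (name, age, sex, income, country) VALUES
  ('Alice', 32, 1, 12000, 'Hong Kong'),
  ('Bob', 41, 0, 8000, 'Japan'),
  ('Carol', 28, 1, 15000, 'France');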
Check slow log
Enable the slow query log
mysql> set global slow_query_log=on;
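By default only statements slower than long_query_time (10 seconds) are written to the log. You may also want to lower that threshold and check where the log file lives, for example:

mysql> set global long_query_time = 1;
mysql> show variables like 'slow_query%';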
Execute a query that has no index to use
mysql> select count(id) from users where age < 35 and age > 30 and sex = 1 and income > 10000 and country = 'Hong Kong';
Tail the slow query log
$ tail -f /var/lib/mysql/vagrant-ubuntu-trusty-64-slow.log
Notice the line 'Lock_time: 0.000211 Rows_sent: 1 Rows_examined: 10150175': the whole table (over 10 million rows) was scanned just to return a single row, while the table was only locked for 0.000211 s.
Explain slow SQL query
mysql> explain select count(id) from users where age < 35 and age > 30 and sex = 1 and income > 10000 and country = 'Hong Kong';
Query Cache
The MySQL query cache is enabled by default on this setup, with a default size of 16 MB:
$ cat /etc/mysql/my.cnf | grep query_cache
mysql> show variables like '%query_cache%';
The query cache only suits tables with a very low update frequency and a high volume of read-only queries; otherwise it is not recommended.
It is recommended to keep the query cache size at no more than about 100 MB.
A cached result is only reused when two SQL statements are exactly identical; the SQL text, database protocol version, character set and other factors all affect the lookup, for example:
mysql> set names latin1; SELECT * FROM table_name;
Most importantly, the query cache is protected by a global lock in MySQL, which must be taken whenever a query cache block is updated.
For example, if a query result is 20 KB and query_cache_min_res_unit is 4 KB (the default value, adjustable), the result has to be written to the query cache in five blocks, which is costly.
The query cache therefore only pays off for read-heavy workloads.
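To check whether the cache actually pays off, compare its hit and insert counters (these status variables exist as long as the query cache feature does; it was removed entirely in MySQL 8.0):

mysql> show status like 'Qcache%';

A low Qcache_hits relative to Qcache_inserts means results are being cached but rarely reused.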
Disabling it is simple; there are two ways (an example of the first follows below):
- set the options query_cache_type = 0 and query_cache_size = 0;
- if MySQL was compiled from source, compile with the parameter --without-query-cache
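A minimal sketch of the first option, assuming the cache was enabled at startup (put the same two settings under [mysqld] in my.cnf to make the change permanent):

mysql> set global query_cache_type = 0;
mysql> set global query_cache_size = 0;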
Create Index
Type | Desc |
---|---|
Index | Duplicate values are allowed |
UNIQUE index | Duplicate values are not allowed; NULL values are allowed |
PRIMARY KEY | Duplicate values are not allowed, NULL is not allowed, and a table can have only one primary key |
FULLTEXT index | Only supported by MyISAM (before MySQL 5.6) and effectively limited to English text |
mysql> ALTER TABLE users ADD INDEX country_age_income (country,age,sex,income);
Re-run the query above
mysql> explain select count(id) from users where age < 35 and age > 30 and sex = 1 and income > 10000 and country = 'Hong Kong';
You can see that the index is used: key=country_age_income
mysql> select count(id) from users where age < 35 and age > 30 and sex = 1 and income > 10000 and country = 'Hong Kong';
With the index, the query time drops from 10.06 s to 0.52 s.
Optimize SQL query
Please Google for techniques that fit your own business background.
Splitting read operations onto slaves
Using Database Proxy
Separating reads and writes in code
PHP frameworks such as Laravel and Symfony can handle this for you.
If you want to do this yourself, the code logic looks like this (the read/write query helpers are filled in here as a sketch, using the legacy mysql_* extension the original code relies on):

<?php
class DB
{
    private $writeConn;
    private $readConns = [];

    function __construct()
    {
        $this->initWriteConn();
        $this->initReadConns();
    }

    private function initWriteConn()
    {
        // master: all writes go here
        $conn = mysql_connect('192.168.0.101:3306', 'root', '111111');
        mysql_select_db('test', $conn);
        mysql_query('set names utf8', $conn);
        $this->writeConn = $conn;
    }

    private function initReadConns()
    {
        // slaves: reads are spread over these connections
        $conn1 = mysql_connect('192.168.0.105:3306', 'root', '111111');
        mysql_select_db('users', $conn1);
        mysql_query('set names utf8', $conn1);

        $conn2 = mysql_connect('192.168.0.106:3306', 'root', '111111');
        mysql_select_db('users', $conn2);
        mysql_query('set names utf8', $conn2);

        $this->readConns[] = $conn1;
        $this->readConns[] = $conn2;
    }

    public function query($sql)
    {
        $queryStr = trim($sql);
        // route by statement type: SELECTs go to a slave, everything else to the master
        $queryType = strtolower(substr($queryStr, 0, 6));
        if ($queryType == 'select') {
            $result = $this->readQuery($queryStr);
        } else {
            $result = $this->writeQuery($queryStr);
        }
        return $result;
    }

    private function readQuery($sql)
    {
        // pick a slave at random for simple load balancing
        $conn = $this->readConns[array_rand($this->readConns)];
        return mysql_query($sql, $conn);
    }

    private function writeQuery($sql)
    {
        return mysql_query($sql, $this->writeConn);
    }
}

$sql1 = "select * from users";
$sql2 = "insert into users (name) values ('Alice')";
$sql3 = "delete from users where id = 5";
$sql4 = "update users set name='Jerry' where id = 4";

$db = new DB();
$result1 = $db->query($sql1);
$result2 = $db->query($sql2);
$result3 = $db->query($sql3);
$result4 = $db->query($sql4);
Partition
Please read another blog post: About Mysql Partition
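As a quick taste of what that post covers, here is a sketch of range-partitioning the users table by its primary key (the boundary values here are arbitrary):

ALTER TABLE users PARTITION BY RANGE (id) (
  PARTITION p0 VALUES LESS THAN (5000000),
  PARTITION p1 VALUES LESS THAN MAXVALUE
);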
Vertical Split Table
Split a table that has many columns by column: move the fields that are rarely used, or that are very long, into an extension table, for example:
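A sketch, reusing the users table from above (users_ext and its columns are hypothetical, purely for illustration):

-- rarely used or very long fields move into an extension table keyed by the user id
CREATE TABLE users_ext (
  user_id INT UNSIGNED NOT NULL PRIMARY KEY,
  bio TEXT,
  signature VARCHAR(255)
);

-- when those fields are needed, join back on the key
SELECT u.name, e.bio FROM users u JOIN users_ext e ON e.user_id = u.id WHERE u.id = 42;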
Horizontal Split Table
According to a chosen rule, different rows of the table are distributed to different database tables.
Please read another blog post: Mysql Table Horizontal Split By Mycat
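As a tiny illustration of the idea (the post above covers a real setup with Mycat; the modulo-2 rule here is arbitrary):

-- two shard tables with the same structure as users
CREATE TABLE users_0 LIKE users;
CREATE TABLE users_1 LIKE users;

-- the application or proxy routes each row by a rule, e.g.
--   id % 2 = 0  ->  users_0
--   id % 2 = 1  ->  users_1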
Vertical Split Database
Separate different business modules into different databases.
Errors encountered
Cannot allocate memory
$ php artisan db:seed
mmap() failed: [12] Cannot allocate memory
mmap() failed: [12] Cannot allocate memory
PHP Fatal error: Out of memory (allocated 2160070656) (tried to allocate 2147483656 bytes) in /home/liang/learnlaravel5/vendor/laravel/framework/src/Illuminate/Database/Eloquent/FactoryBuilder.php on line 194
In FactoryBuilder.php line 194:
Out of memory (allocated 2160070656) (tried to allocate 2147483656 bytes)

Increase your system swap. Here I add 8 GB of swap; adjust according to your actual situation:
$ dd if=/dev/zero of=/mnt/swapfile bs=1M count=8192
$ mkswap /mnt/swapfile && swapon /mnt/swapfile
The problem still appeared: I'm using Laravel to create a huge number of MySQL records, and they are all built in memory.
To fix it, the Laravel seeder code must be changed so that the records are generated and inserted in smaller batches instead of all at once.
Maximum retries of 10000 reached without finding a unique value
When using Faker to mock the email data, it seems the number of unique email addresses it can generate maxes out around 10 million.
Mysql Command Tips
show columns:
mysql> desc {TABLE_NAME};
rename a table:
mysql> alter table {OLD_TABLE_NAME} rename to {NEW_TABLE_NAME};
add a column:
mysql> alter table {TABLE_NAME} add column {COLUMN_NAME} varchar(30);
drop a column:
mysql> alter table {TABLE_NAME} drop column {COLUMN_NAME};
modify a column's definition:
mysql> alter table {TABLE_NAME} modify name varchar(22);
show table size:
mysql> select concat(round(sum(DATA_LENGTH/1024/1024),2),'MB') as data from information_schema.TABLES WHERE TABLE_SCHEMA='{DATABASE_NAME}' and TABLE_NAME = '{TABLE_NAME}';
change the database default character set:
mysql> ALTER DATABASE {DATABASE_NAME} CHARACTER SET = utf8mb4 COLLATE = utf8mb4_unicode_ci;