How do I dump the data of some SQLite3 tables?
SQL Problem:
How do I dump the data, and only the data, not the schema, of some SQLite3 tables of a database (not all the tables)?
The dump should be in SQL format, as it should be easily re-entered into the database later and should be done from the command line. Something like
sqlite3 db .dump
but without dumping the schema and selecting which tables to dump.
Solution:
You’re not saying what you wish to do with the dumped file.
To get a CSV file (which can be imported into almost everything)
.mode csv
-- use '.separator SOME_STRING' for something other than a comma.
.headers on
.out file.csv
select * from MyTable;
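The same export can be done non-interactively (a sketch, reusing the db file and MyTable from above):
sqlite3 -header -csv db 'select * from MyTable;' > file.csv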
To get an SQL file (which can be reinserted into a different SQLite database)
.mode insert <target_table_name>
.out file.sql
select * from MyTable;
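For illustration, with .mode insert target_table and a hypothetical two-column MyTable, file.sql would contain statements like:
INSERT INTO target_table VALUES('Alice',1);
INSERT INTO target_table VALUES('Bob',2);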
You can do this by taking the difference of the .schema and .dump outputs, for example with grep:
sqlite3 some.db .schema > schema.sql
sqlite3 some.db .dump > dump.sql
grep -vx -f schema.sql dump.sql > data.sql
The data.sql file will then contain only the data, without the schema, something like this:
BEGIN TRANSACTION;
INSERT INTO "table1" VALUES ...;
...
INSERT INTO "table2" VALUES ...;
...
COMMIT;
I hope this helps you.
You can specify one or more table arguments to the .dump special command, e.g. sqlite3 db ".dump 'table1' 'table2'".
Not the best way, but at least it does not need external tools (except grep, which is standard on *nix boxes anyway):
sqlite3 database.db3 .dump | grep '^INSERT INTO "tablename"'
but you do need to repeat this command for each table you are looking for, as in the sketch below. Note that this does not include the schema.
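For several tables, a small wrapper loop saves retyping (a sketch; table1 and table2 are placeholder names):
for t in table1 table2; do
  sqlite3 database.db3 .dump | grep "^INSERT INTO \"$t\""
done > data.sql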
Any answer which suggests using grep to exclude the CREATE lines or just grab the INSERT lines from the sqlite3 $DB .dump output will fail badly. The CREATE TABLE commands list one column per line (so excluding CREATE won't get all of it), and values on the INSERT lines can have embedded newlines (so you can't grab just the INSERT lines).
for t in $(sqlite3 $DB .tables); do
  echo -e ".mode insert $t\nselect * from $t;"
done | sqlite3 $DB > backup.sql
Tested on sqlite3 version 3.6.20.
If you want to exclude certain tables you can filter them with $(sqlite3 $DB .tables | grep -v -e one -e two -e three), or if you want to get a specific subset, replace that with one two three. A complete sketch follows.
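Putting it together (a sketch; DB and the excluded names one, two, three are placeholders):
DB=some.db
for t in $(sqlite3 $DB .tables | grep -v -e one -e two -e three); do
  echo -e ".mode insert $t\nselect * from $t;"
done | sqlite3 $DB > backup.sql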
As an improvement to Paul Egan’s answer, this can be accomplished as follows:
sqlite3 database.db3 '.dump "table1" "table2"' | grep '^INSERT'
–or–
sqlite3 database.db3 '.dump "table1" "table2"' | grep -v '^CREATE'
The caveat, of course, is that you have to have grep installed.
In Python, Java, or any other high-level language, the .dump command is not available, so we need to code the conversion to CSV by hand. I give a Python example; examples in other languages would be appreciated:
import sqlite3
from os import path
import csv

def convert_to_csv(directory, db_name):
    # Open the database and list all of its tables.
    conn = sqlite3.connect(path.join(directory, db_name + '.db'))
    cursor = conn.cursor()
    cursor.execute("SELECT name FROM sqlite_master WHERE type='table';")
    tables = cursor.fetchall()
    for table in tables:
        table = table[0]
        cursor.execute('SELECT * FROM ' + table)
        column_names = [column_name[0] for column_name in cursor.description]
        # Write a header row, then every row of the table.
        with open(path.join(directory, table + '.csv'), 'w', newline='') as csv_file:
            csv_writer = csv.writer(csv_file)
            csv_writer.writerow(column_names)
            for row in cursor:
                csv_writer.writerow(row)
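A hypothetical invocation, assuming a database file /tmp/mydb.db exists:
convert_to_csv('/tmp', 'mydb')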
If you have 'panel data', in other words many individual entries with ids, add this to the loop and it will also dump summary statistics:
        # Assumes the table also has a 'round' column to group by.
        if 'id' in column_names:
            with open(path.join(directory, table + '_aggregate.csv'), 'w', newline='') as csv_file:
                csv_writer = csv.writer(csv_file)
                column_names.remove('id')
                column_names.remove('round')
                # Sum every remaining column per round.
                sum_string = ','.join('sum(%s)' % item for item in column_names)
                cursor.execute('SELECT round, ' + sum_string + ' FROM ' + table + ' GROUP BY round;')
                csv_writer.writerow(['round'] + column_names)
                for row in cursor:
                    csv_writer.writerow(row)
Review of other possible solutions
Include only INSERTs
sqlite3 database.db3 .dump | grep '^INSERT INTO "tablename"'
Easy to implement, but it will fail if any of your columns include newlines.
SQLite insert mode
for t in $(sqlite3 $DB .tables); do
  echo -e ".mode insert $t\nselect * from $t;"
done | sqlite3 $DB > backup.sql
This is a nice and customizable solution, but it doesn't work if your columns hold BLOB objects, like the 'Geometry' type in SpatiaLite.
Diff the dump with the schema
sqlite3 some.db .schema > schema.sql
sqlite3 some.db .dump > dump.sql
grep -v -f schema.sql dump.sql > data.sql
Not sure why, but it is not working for me (possibly because, without grep's -x option, short schema lines such as ); also match inside the INSERT lines).
Another (new) possible solution
Probably there is no single best answer to this question, but one that works for me is to grep the inserts, taking into account that there may be newlines in the column values, with an expression like this:
grep -Pzo "(?s)^INSERT.*);[ t]*$"
To select the tables to be dumped, .dump admits a LIKE argument to match the table names, but if this is not enough a simple script is probably the better option:
TABLES='table1 table2 table3'
echo '' > /tmp/backup.sql
for t in $TABLES ; do
  echo -e ".dump ${t}" | sqlite3 database.db3 | grep -Pzo "(?s)^INSERT.*?);$" >> /tmp/backup.sql
done
or, something more elaborate to respect foreign keys and encapsulate the whole dump in a single transaction:
TABLES='table1 table2 table3'
echo 'BEGIN TRANSACTION;' > /tmp/backup.sql
echo '' >> /tmp/backup.sql
for t in $TABLES ; do
  echo -e ".dump ${t}" | sqlite3 $1 | grep -Pzo "(?s)^INSERT.*?);$" | grep -v -e 'PRAGMA foreign_keys=OFF;' -e 'BEGIN TRANSACTION;' -e 'COMMIT;' >> /tmp/backup.sql
done
echo '' >> /tmp/backup.sql
echo 'COMMIT;' >> /tmp/backup.sql
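Saved as a script, say backup.sh (a hypothetical name), it takes the database file as its first argument:
sh backup.sh database.db3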
Take into account that the grep expression will fail if ); is present as a string in any of the columns.
To restore it (in a database with the tables already created)
sqlite3 -bail database.db3 < /tmp/backup.sql
According to the SQLite documentation for the Command Line Shell For SQLite, you can export an SQLite table (or part of a table) as CSV, simply by setting the “mode” to “csv” and then running a query to extract the desired rows of the table:
sqlite> .header on
sqlite> .mode csv
sqlite> .once c:/work/dataout.csv
sqlite> SELECT * FROM tab1;
sqlite> .exit
Then use the “.import” command to import CSV (comma separated value) data into an SQLite table:
sqlite> .mode csv
sqlite> .import C:/work/dataout.csv tab1
sqlite> .exit
Please read the further documentation about the two cases to consider: (1) table “tab1” does not previously exist, and (2) table “tab1” already exists.
The best method would be to do what the sqlite3 .dump command does, excluding the schema parts.
Example pseudo code:
SELECT 'INSERT INTO ' || tableName || ' VALUES( ' ||
{for each value} ' quote(' || value || ')' (+ commas until final)
|| ')' FROM 'tableName' ORDER BY rowid DESC
See src/shell.c:838 (for sqlite-3.5.9) for the actual code.
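As a concrete, runnable sketch of the same idea (t, a and b are hypothetical table and column names; SQLite's quote() function produces properly escaped literals):
sqlite3 database.db3 "SELECT 'INSERT INTO t VALUES(' || quote(a) || ',' || quote(b) || ');' FROM t;" > data.sql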
You might even just take that shell and comment out the schema parts and use that.
The answer by retracile should be the closest one, yet it does not work in my case: one insert query just broke in the middle and the export stopped. I am not sure what the reason is; however, it works fine during .dump. In the end I wrote a tool to split up the SQL generated from .dump.
You could do a SELECT on the tables, inserting commas after each field to produce a CSV, or use a GUI tool to return all the data and save it as a CSV.
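For example (col1 and col2 are hypothetical column names; note that this does no CSV quoting or escaping):
sqlite3 database.db3 "SELECT col1 || ',' || col2 FROM MyTable;" > MyTable.csv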