I have a huge table (about 60 GB) in the form of a gzip-compressed CSV file, and I want to transform it into an SQLite file.
What I do at the moment is the following:
    import pandas
    import sqlite3

    cnx = sqlite3.connect('db.sqlite')
    df = pandas.read_csv('db.gz', compression='gzip')
    df.to_sql('table_name', cnx)
It works fine for smaller files, but with the huge file I run into memory problems. The problem is that pandas reads the whole table into memory (RAM) and only then writes it to the SQLite file.
Is there an elegant solution to this problem?
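For illustration, one chunked variant I am considering (a rough, untested sketch; the chunk size of 100000 rows is an arbitrary value to tune for available RAM) would look like this:

    import pandas
    import sqlite3

    cnx = sqlite3.connect('db.sqlite')

    # Read the gzipped CSV in pieces instead of all at once.
    for chunk in pandas.read_csv('db.gz', compression='gzip', chunksize=100000):
        # Append each chunk to the same table rather than replacing it,
        # and skip writing the DataFrame index as an extra column.
        chunk.to_sql('table_name', cnx, if_exists='append', index=False)

    cnx.close()

Is this the right way to do it, or is there a more elegant or faster approach?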