Saturday, October 31, 2015

SQLite performance on large tables

I've done quite a bit of research on the following issue but haven't found any helpful hints so far.

I'm using sqlite3 in Python in an application that runs under both Windows and Linux. My database file is currently around 700 MB.

I've noticed a performance issue that depends on the number of entries in my largest table. The table has ten integer and float columns plus one varchar column.
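For reference, a minimal sketch of what such a table might look like. The database file name, table name, and column names here are hypothetical; only the mix of integer, float, and varchar columns comes from the description above:

```python
import sqlite3

# Hypothetical schema: ten numeric columns (integers and floats)
# plus one varchar column, roughly matching the description above.
conn = sqlite3.connect("mydata.db")
conn.execute("""
    CREATE TABLE IF NOT EXISTS measurements (
        id    INTEGER PRIMARY KEY,
        a     INTEGER,
        b     INTEGER,
        c     INTEGER,
        d     INTEGER,
        x     REAL,
        y     REAL,
        z     REAL,
        w     REAL,
        v     REAL,
        label VARCHAR(64)
    )
""")
conn.commit()
```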

The table has 1.6 million rows. At that size, each SELECT or UPDATE command takes 327 ms, which is far too long for my application: it now spends most of its time waiting on SQLite.
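As a rough illustration of how such per-statement timings can be taken in Python (the query and the filter value are placeholders, not the actual application code):

```python
import sqlite3
import time

conn = sqlite3.connect("mydata.db")

def time_query(sql, params=()):
    """Run one statement and return its wall-clock duration in milliseconds."""
    start = time.perf_counter()
    conn.execute(sql, params).fetchall()
    return (time.perf_counter() - start) * 1000.0

# Example: time a SELECT that filters on one column of the hypothetical table.
ms = time_query("SELECT * FROM measurements WHERE label = ?", ("some_value",))
print(f"query took {ms:.1f} ms")
```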

I noticed that performance improves drastically as the table shrinks. I measured:

  • 1.6 million rows: 327 ms
  • 670,000 rows: 149 ms
  • 280,000 rows: 71 ms
  • 147,000 rows: 44 ms
  • 19,000 rows: 15 ms

So access time increases almost linearly with table size.
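Query time growing linearly with row count usually points to a full table scan. One way to check this, sketched here against the hypothetical table from above, is to ask SQLite for the query plan; a plan line reporting a scan (rather than a search using an index) means every row is visited on each query:

```python
import sqlite3

conn = sqlite3.connect("mydata.db")

# EXPLAIN QUERY PLAN shows how SQLite intends to execute the statement.
# A detail like "SCAN measurements" (or "SCAN TABLE measurements" in older
# versions) indicates a full table scan; "SEARCH ... USING INDEX" indicates
# an index lookup that does not touch every row.
for row in conn.execute(
        "EXPLAIN QUERY PLAN SELECT * FROM measurements WHERE label = ?",
        ("some_value",)):
    print(row)
```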

Is there any way to improve performance here? Do I really have to split my data across multiple identical tables, even though that is very bad style?!

Thanks in advance!
