Is PostgreSQL IN() statement still fast with up to 1000 arguments?


I have queries that return all the rows from one table whose id is in some list of values. The queries are built dynamically at run time, like SELECT * FROM table WHERE id IN (%), and the % value is guaranteed to be a list of literal values, never a subquery; in some cases this list can be up to 1000 elements long. Should I limit it to a smaller number (50-100 elements is as low as I would go in this case), or will the performance benefit be negligible?
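For concreteness, a minimal sketch of that query shape (my_table and the literal values are placeholders, not from the actual application):

    SELECT *
    FROM my_table
    WHERE id IN (1, 2, 3, /* ... up to ~1000 literal values ... */ 999, 1000);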

I assume this is a big table; otherwise it does not matter much.

Depending on the size of the table and the number of keys, this can turn into a sequential scan: with many keys in the IN list, Postgres often decides not to use an index for it at all. The more keys, the bigger the chance of a sequential scan.
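One way to check this on your own data (a sketch, assuming a table my_table with a btree index on id) is to compare the plans for a short and a long list:

    EXPLAIN SELECT * FROM my_table WHERE id IN (1, 2, 3);
    -- small list: typically an index or bitmap scan on id

    EXPLAIN SELECT * FROM my_table WHERE id IN (/* ...hundreds of literal values... */);
    -- large list: the planner may estimate a Seq Scan as cheaper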

If you use another indexed column in the WHERE clause, such as:

    SELECT * FROM table WHERE id IN (%) AND my_date > '2010-01-01';

it is likely to fetch all the rows matching the indexed (my_date) column first, and then finish with an in-memory scan for the IN condition.

Using a JOIN against a constant or temporary table may or may not help: Postgres still needs to match all the rows, with either a nested loop (unlikely for large data) or a hash/merge join.
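A sketch of both JOIN variants, using hypothetical names (my_table, wanted_ids) that are not from the original question:

    -- Variant 1: join against a constant (VALUES) table.
    SELECT t.*
    FROM my_table t
    JOIN (VALUES (1), (2), (3)) AS ids(id) ON t.id = ids.id;

    -- Variant 2: load the keys into a temporary table first.
    CREATE TEMP TABLE wanted_ids (id integer PRIMARY KEY);
    INSERT INTO wanted_ids VALUES (1), (2), (3);
    ANALYZE wanted_ids;  -- give the planner row estimates for the temp table
    SELECT t.* FROM my_table t JOIN wanted_ids USING (id);

Either way, EXPLAIN will show whether a nested loop or a hash/merge join was chosen for the match.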

I would say the solution is:

  • Use a sane number of keys in the IN list.
  • When possible, use other indexed columns in the WHERE clause. If the IN condition forces an in-memory scan over all rows, it will at least run over fewer rows thanks to those additional parameters (see the sketch after this list).
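A sketch of the second point, again with hypothetical names; it assumes an index on my_date exists or is created first:

    CREATE INDEX my_table_my_date_idx ON my_table (my_date);

    SELECT *
    FROM my_table
    WHERE my_date > '2010-01-01'       -- the my_date index narrows the candidates
      AND id IN (1, 2, 3 /* ... */);   -- IN is then checked on the smaller set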
