Another use for generate_series: row multiplier

I had a request the other day: how many simultaneous users are on the site, by time of day. I already have a session database that’s computed nightly from weblogs: it contains the times at which each session started and ended.

CREATE TABLE sessions
(
user_id integer NOT NULL,
start_at timestamp without time zone,
end_at timestamp without time zone,
duration double precision,
views integer
)

I thought for sure the next step would be to dump some data, then write some Ruby or R to scan through sessions and see how many sessions were open at a time.

Until I came up with a nice solution in SQL (Postgres). Stepping back: if I can sample from sessions at, say, one-minute intervals, I can count the number of distinct sessions open at each minute. What I need is a row per session per minute spanned. generate_series is a “set returning function” that can do just that. In the snippet below, I use generate_series to generate a set of (whole) minutes from the start of the session to the end of the session. That essentially multiplies the session row into n rows, one for each of the minutes the session spans.

From there, it’s easy to do a straightforward group by, counting distinct user_id:

with rounded_sessions as (
select user_id, start_at, end_at,
generate_series(date_trunc('minute',start_at), end_at, '1 minute') to_the_minute from sessions
where start_at between '2012-01-21' and '2012-01-28'
)
select to_the_minute, count(distinct user_id) from rounded_sessions group by 1

The date_trunc call is important so that session rows are aligned to whole minutes; if that’s not done, none of the rows will line up for the counts.
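To see the multiplication concretely, here’s a throwaway call for a single made-up session running from 10:00:30 to 10:03:10 (the timestamps are mine, not from the real data):

select generate_series(date_trunc('minute', timestamp '2012-01-21 10:00:30'),
                       timestamp '2012-01-21 10:03:10',
                       '1 minute') as to_the_minute;

-- 2012-01-21 10:00:00
-- 2012-01-21 10:01:00
-- 2012-01-21 10:02:00
-- 2012-01-21 10:03:00

One made-up session, four whole minutes, four rows.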

The counts from that query won’t include minutes during which no users were logged in. To fill those in, the query below uses generate_series again to generate all the minutes from the first minute present to the last, then left joins the counts to that set, coalescing missing entries to zero.


with rounded_sessions as (
select user_id, start_at, end_at,
generate_series(date_trunc('minute',start_at), end_at, '1 minute') as to_the_minute
from sessions
where start_at between '2012-01-21' and '2012-01-28'
),
counts_by_minute as (
select to_the_minute, count(distinct user_id) from rounded_sessions
group by 1
),
all_the_minutes as (
select generate_series(min(to_the_minute), max(to_the_minute), '1 minute') as minute_fu from rounded_sessions
)

select minute_fu as to_the_minute, coalesce(count, 0) as users from all_the_minutes
left join counts_by_minute on all_the_minutes.minute_fu = counts_by_minute.to_the_minute

Computing Distinct Items Across Sliding Windows in SQL

As a member of PatientsLikeMe’s Data team, from time to time we’re asked to compute how many unique users did action X on the site within a date range, say 28 days, or several date ranges (1, 14, and 28 days, for example). It’s easy enough to do that for a given day, but to do that for every day over a span of time (in one query) took some thinking. Here’s what I came up with.

One day at a time

First, a simplified example table:

create table events (
user_id integer,
event varchar,
date date
)

Getting unique user counts by event on any given day is easy. Below, we’ll get the counts of unique users by event for the week leading up to Valentine’s Day:

select count(distinct user_id), event from events
where date between '2011-02-07' and '2011-02-14'
group by 2

Now Do That For Every Day

The simplest thing that could possibly work is to just issue that query once for each day in the desired span. We’re looking for something faster, and a bit more elegant.

Stepping back a bit, for a seven day time window, we’re asking that an event on 2/7/2011 count for that day, and also count for the 6 following days – effectively we’re mapping the events of each day onto itself and 6 other days. That sounds like a SQL join waiting to happen. Once the join happens, it’s easy to group by the mapped date and do a distinct count.

With a table like the one below

from_date to_date
2011-01-01 2011-01-01
2011-01-01 2011-01-02
2011-01-01 2011-01-03
2011-01-01 2011-01-04
2011-01-01 2011-01-05
2011-01-01 2011-01-06
2011-01-01 2011-01-07
2011-01-02 2011-01-02

this SQL becomes easy:

select to_date, event, count(distinct user_id) from events
join dates_plus_7 on events.date = dates_plus_7.from_date
group by 1,2

to_date      event  count
2011-01-05   bar    20
2011-01-05   baz    27
2011-01-05   foo    24
2011-01-06   bar    31

You’ll then need to trim the ends of your data to adjust for the places where the windows ran off the edge of the data.
That works for me on PostgreSQL 8.4. Your mileage may vary with other databases.
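If you want the trimming in the query itself, a sketch along these lines should work, assuming the 7-day window and the dates_plus_7 table used above; it keeps only the windows that fall entirely within the data:

select to_date, event, count(distinct user_id) from events
join dates_plus_7 on events.date = dates_plus_7.from_date
where to_date >= (select min(date) from events) + 6
  and to_date <= (select max(date) from events)
group by 1,2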

How Do I Get One of Those?
A dates table like that is a one-liner using the generate_series function:

select date::date as from_date, date::date+plus_day as to_date from
generate_series('2011-01-01'::date, '2011-02-28'::date, '1 day') as date,
generate_series(0,6,1) as plus_day ;

There we get the Cartesian product of the set of dates in the desired range and the set of numbers from 0 to 6. Add the two, treating the numbers as day offsets, and you’re done.
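To materialize that result under the name used in the join above, wrap it in a create table as (the table name just matches the earlier example):

create table dates_plus_7 as
select date::date as from_date, date::date+plus_day as to_date from
generate_series('2011-01-01'::date, '2011-02-28'::date, '1 day') as date,
generate_series(0,6,1) as plus_day;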

Redundant Indexing in PostgreSQL

If you have a table with a column included as the first column in a multi-column index and then again with its own index, you may be over-indexing. Postgres will use the multi-column index for queries on the first column.

From the docs

A multicolumn B-tree index can be used with query conditions that involve any subset of the index’s columns, but the index is most efficient when there are constraints on the leading (leftmost) columns.


Performance

If you click around that section of the docs, you’ll surely come across the section on multi-column indexing and performance, in particular this passage:

You could also create a multicolumn index on (x, y). This index would typically be more efficient than index combination for queries involving both columns, but as discussed in Section 11.3, it would be almost useless for queries involving only y, so it should not be the only index. A combination of the multicolumn index and a separate index on y would serve reasonably well. For queries involving only x, the multicolumn index could be used, though it would be larger and hence slower than an index on x alone.

Life is full of performance tradeoffs, so we should explore just how much slower it is to use a multi-column index for single-column queries.

First, let’s create a dummy table:

CREATE TABLE foos_and_bars
(
id serial NOT NULL,
foo_id integer,
bar_id integer,
CONSTRAINT foos_and_bars_pkey PRIMARY KEY (id)
)

Then, using R, we’ll create 3 million rows of nicely distributed data:

rows = 3000000
foo_ids = seq(1,250000,1)
bar_ids = seq(1,20,1)
data = data.frame(foo_id = sample(foo_ids, rows,TRUE), bar_id= sample(bar_ids,rows,TRUE))

Dump that to a text file, load it up with COPY, and we’re good to go.
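For the record, the load itself is a one-line COPY; the path and file name here are made up, and psql’s \copy works the same way if you’d rather load from the client side:

COPY foos_and_bars (foo_id, bar_id) FROM '/tmp/foos_and_bars.tsv';  -- assumes a tab-delimited file, no header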

Create the compound index:

CREATE INDEX foo_id_and_bar_id_index
ON foos_and_bars
USING btree
(foo_id, bar_id);

Run a simple query to make sure the index is used:

test_foo=# explain analyze select * from foos_and_bars where foo_id = 123;
QUERY PLAN
---------------------------------------------------------------------------------------------------------------------------------
Bitmap Heap Scan on foos_and_bars  (cost=4.68..55.74 rows=13 width=12) (actual time=0.026..0.038 rows=8 loops=1)
Recheck Cond: (foo_id = 123)
->  Bitmap Index Scan on foo_id_and_bar_id_index  (cost=0.00..4.68 rows=13 width=0) (actual time=0.020..0.020 rows=8 loops=1)
Index Cond: (foo_id = 123)
Total runtime: 0.072 ms
(5 rows)

Now we’ll make 100 queries by foo_id with this index in place, then repeat with a single-column index on foo_id installed, using this code:

require 'rubygems'
require 'benchmark'
require 'pg'

TEST_IDS = [...] # randomly selected 100 ids in R

conn = PGconn.open(:dbname => 'test_foo')

def perform_test(conn, foo_id)
  time = Benchmark.realtime do
    res = conn.exec("select * from foos_and_bars where foo_id = #{foo_id}")
    res.clear
  end
end

TEST_IDS.map { |id| perform_test(conn, id) } # warm things up?
data = TEST_IDS.map { |id| perform_test(conn, id) }

data.each do |d|
  puts d
end
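The single-column index for the second run would be along these lines (the index name is mine):

CREATE INDEX foo_id_index
ON foos_and_bars
USING btree
(foo_id);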

How do things stack up? I’d say about evenly.

Remember: Indexing isn’t free, and Postgres is pretty good at using (and reusing) your indexes, so you may not need to create as many as you think.

Multiple Phrase Search in PostgreSQL

Tsearch, the full text search engine in PostgreSQL, is great at rapidly searching for keywords (and combinations of keywords) in large bodies of text. It does not, however, excel at matching multi-word phrases. There are some techniques to work around that and let your application leverage tsearch to find phrases.

Before I go on, I’ll credit Paul Sephton’s Understanding Full Text Search for opening my eyes to some of the possibilities to enable phrase search on top of tsearch’s existing capabilities.

Tsearch operates on tsvectors and tsqueries. A tsvector is a bag-of-words-like structure – a list of the unique words appearing in a piece of text, along with their positions in the text. Searches are performed by constructing a tsquery, which is a boolean expression combining words with AND (&), OR (|), and NOT (!) operators, and comparing the tsquery against candidate tsvectors with the @@ operator.
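A quick illustration of the two types and the @@ operator, with sample text of my own:

select to_tsvector('english', 'I had a meatball sub for lunch');
-- one row: each stemmed word with its positions, stop words dropped

select to_tsvector('english', 'I had a meatball sub for lunch') @@ to_tsquery('meatball & sub');
-- true

For example, the query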

select * from articles where to_tsvector('english',articles.body) @@ 'meatball & sub';

will match articles where the body contains the word meatball and the word sub. If there’s an index on to_tsvector('english', articles.body), this query is a very efficient index lookup.
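That index is an expression index on the tsvector; creating it looks something like this (the index name is mine):

CREATE INDEX articles_body_tsvector_idx
ON articles
USING gin (to_tsvector('english', body));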

Single Phrase Search

Now how do we match articles with the phrase “meatball sub”, anywhere in the article’s body? Doing the naive query

select * from articles where body like '%meatball sub%'

will work, but it will be slow because the leading wildcard kills any chance of using an index on that column. What we can do to make this go fast is the following:

select * from articles where to_tsvector('english',articles.body) @@ 'meatball & sub' AND body like '%meatball sub%'

This will use the full text index to find the set of articles where the body has both words, then that (presumably) smaller set of articles can be scanned for the words together.

Multi Phrase Search

It’s simple to extend the above query to match two phrases:

select * from articles where to_tsvector('english',articles.body) @@ 'meatball & sub & ham & sandwich' AND body like '%meatball sub%' AND body like '%ham sandwich%';

That query can be tightened up using Postgres’s support for arrays:

select * from articles where to_tsvector('english',articles.body) @@ 'meatball & sub & ham & sandwich' AND body like ALL('{"%meatball sub%","%ham sandwich%"}')

Stepping back a bit, let’s create a table called “concepts” to allow users of an application to store searches on lists of phrases, and let’s also allow the user to specify whether all phrases must match, or just one of them.

CREATE TABLE concepts
(
id serial,
match_all boolean,
phrases character varying[],
query tsquery
)

Now we can specify and execute that previous search this way:

insert into concepts(match_all,phrases,query) VALUES(TRUE,'{"%meatball sub%","%ham sandwich%"}','meatball & sub & ham & sandwich');
select articles.* from articles join concepts on (concepts.query @@ to_tsvector(body)) AND ((match_all AND body like ALL(phrases)) OR (not match_all AND body like ANY(phrases)));

Where this approach really shines compared with external text search tools is in aggregate queries, like counting up matching articles by date.

select count(distinct articles.id), articles.date from articles join concepts on (concepts.query @@ to_tsvector(body)) AND ((match_all AND body like ALL(phrases)) OR (not match_all AND body like ANY(phrases)))
group by articles.date

The logic to combine a list of phrases into the appropriate query, based on whether we want to match any or all of the phrases, is easy to write at the application layer. It’s desirable not to have to include the wildcards in the phrase array, and it’s easy to write a function that adds them at runtime.

CREATE OR REPLACE FUNCTION wildcard_wrapper(list varchar[]) RETURNS varchar[] AS $$
DECLARE
  return_val varchar[];
BEGIN
  for idx in 1 .. array_upper(list, 1)
  loop
    return_val[idx] := '%' || list[idx] || '%';
  end loop;
  return return_val;
END;
$$ LANGUAGE plpgsql;
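A quick sanity check of what it returns, using the phrases from the example above:

select wildcard_wrapper('{"meatball sub","ham sandwich"}');
-- {"%meatball sub%","%ham sandwich%"}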

With that function good to go, we can make that long query just a little longer:

select count(distinct articles.id), articles.date from articles join concepts on (concepts.query @@ to_tsvector(body)) AND ((match_all AND body like ALL(wildcard_wrapper(phrases))) OR (not match_all AND body like ANY(wildcard_wrapper(phrases))))
group by articles.date

It’s straightforward to collapse most, if not all, of the SQL ON clause into a plpgsql function call without adversely affecting the query plan – it’s important that the tsvector index still be involved in the query for adequate performance.
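As a sketch of that collapse (the function name is mine), the phrase logic can move into a function while the tsquery comparison stays in the ON clause where the planner can see it:

CREATE OR REPLACE FUNCTION phrases_match(match_all boolean, phrases varchar[], body text)
RETURNS boolean AS $$
BEGIN
  IF match_all THEN
    RETURN body LIKE ALL(wildcard_wrapper(phrases));
  ELSE
    RETURN body LIKE ANY(wildcard_wrapper(phrases));
  END IF;
END;
$$ LANGUAGE plpgsql;

The join condition then reads: concepts.query @@ to_tsvector(body) AND phrases_match(match_all, phrases, body).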

Further Work

This approach works well for lists of phrases. To support boolean logic on phrases, one approach might be to compile the request down to a tsquery as above, along with a regular expression to winnow down the matches to those containing the phrases.

Deferring Index costs for table to table copies in PostgreSQL

When bulk copying data to a table, it is much faster if the destination table is index- and constraint-free, because it is cheaper to build an index once than to maintain it over many inserts. For Postgres, the pg_restore and SQL COPY commands can do this, but they both require that data be copied from the filesystem rather than directly from another table.

For table-to-table copying (and transformations) the situation isn’t as straightforward. Recently I was working on a problem where we needed to perform some poor-man’s ETL, copying and transforming data between tables in different schemas. Since some of the destination tables were heavily indexed (including a full text index), the task took quite a while. In talking with a colleague about the problem, we came up with the idea of dropping the indexes and constraints prior to the data load and restoring them afterwards.

First stop: how to get the DDL for the indices on a table in Postgres? Poking around the Postgres catalogs, I managed to find the function pg_get_indexdef, which returns the DDL for an index. Combining that with a query I found in a forum somewhere and altered, I came up with a query to get the names and DDL of all the indices on a table (excluding the primary key index).
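The shape of that query is roughly the following (the table name and aliases here are mine); pg_get_constraintdef plays the same role for constraints:

-- indices on a table, minus the primary key, with the DDL to recreate them
select c.relname as index_name, pg_get_indexdef(i.indexrelid) as ddl
from pg_index i
join pg_class c on c.oid = i.indexrelid
where i.indrelid = 'my_table'::regclass
and not i.indisprimary;

-- and the constraints (skipping the primary key again), with DDL to put them back
select con.conname as constraint_name,
'ALTER TABLE my_table ADD CONSTRAINT ' || con.conname || ' ' || pg_get_constraintdef(con.oid) as ddl
from pg_constraint con
where con.conrelid = 'my_table'::regclass
and con.contype <> 'p';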

With that and the query to do the same for constraints, it’s straightforward to build a helper function that will get the DDL for all indices and constraints, drop them, yield to evaluate a block, and then restore the indices and constraints. The method is below:

Use of the function would look like the snippet below. This solution would also allow for arbitrarily complex transformations in Ruby as well as pure SQL.

For my task, loading and transforming data into about 20 tables, doing this reduced the execution time by two-thirds. Of course, your mileage may vary depending on how heavily indexed your destination tables are.

Here’s the whole module: