HBase: Avoid ScannerTimeoutException looking for needles in the haystack with RandomRowFilter

At work, our health insurance has been switched to a high-deductible PPO. Not to worry, we’ve also been granted Health Savings Accounts (HSA) in which to save money, tax-free, to pay bills before meeting the deductible.

That’s all well and good, but I can’t shake the feeling that every time legislation comes out to encourage some activity (retiring, saving for education, paying for health care), the only winner is the financial services industry.

Here’s why: all of these activities require one to maroon a slice of money in an account designated for that purpose. And what comes with accounts? That’s right, fees to the bank. The Wells Fargo HSA we’ve got costs $4.25 a month (paid, for now, by work). That’s $51 a year just to hold money. The interest rate is a paltry 0.1%, so with $2000 in that account (the minimum cash balance before we’re allowed to invest), I’d make about $2.00 a year (net -$49 once I’m paying the fees myself, as I one day will). Thanks for nothing. Further, while some banks graciously waive fees for meeting minimum balances, it’s harder for many people to meet the balance since their money is split so many ways.

These accounts limit my flexibility to spend as life events occur, limit the returns on my money, and cost me fees and headaches: more statements to read, cards to carry, and fine print to decode.

If these costs are meant to be tax-deductible, why not fix the tax code instead, so that all medical expenses, not just those over a certain threshold, are deductible, rather than making these shameless handouts to the banks? Let me deduct things come tax time.

Not long ago, a surprising story appeared on The Daily Show. Apparently there was a bill (HR 3472) in the last session of Congress that would have required health insurance companies to provide discounts on premiums for healthy activities performed by subscribers. Sounds great, right? It even required actual evidence that the healthy activities were improving health:

Requires any healthy behavior or improvement toward healthy behavior to be supported by medical test result information which is certified by a licensed physician, and the individual to whom it relates, as being complete, accurate, and current.

So at no cost to taxpayers, this would have promoted healthy behaviors for Americans!

As surprising as this bill is, more surprising still is that the American Cancer Society,

I had a request the other day: how many simultaneous users are on the site, by time of day. I already have a session database that’s computed nightly from weblogs: it contains the times at which each session started and ended.

CREATE TABLE sessions
(
user_id integer NOT NULL,
start_at timestamp without time zone,
end_at timestamp without time zone,
duration double precision,
views integer
)

I thought for sure the next step would be to dump some data, then write some Ruby or R to scan through sessions and see how many sessions were open at a time.

Until I came up with a nice solution in SQL (Postgres). Stepping back: if I can sample from sessions at, say, one-minute intervals, I can count the number of distinct sessions open at each minute. What I need is a row per session per minute spanned. generate_series is a “set returning function” that can do just that. In the snippet below, I use generate_series to generate a set of (whole) minutes from the start of the session to the end of the session. That essentially multiplies the session row into n rows, one for each of the minutes the session spans.

From there, it’s easy to do a straightforward group by, counting distinct user_id:

with rounded_sessions as (
select user_id, start_at, end_at,
generate_series(date_trunc('minute',start_at), end_at, '1 minute') as to_the_minute from sessions
where start_at between '2012-01-21' and '2012-01-28'
)
select to_the_minute, count(distinct user_id) from rounded_sessions group by 1
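
To see the fan-out concretely, here’s a toy example (the times are made up): a single session from 10:02:30 to 10:05:10 becomes four rows, one per whole minute it touches.

select generate_series(date_trunc('minute', timestamp '2012-01-21 10:02:30'),
                       timestamp '2012-01-21 10:05:10', '1 minute');
-- returns 10:02:00, 10:03:00, 10:04:00, 10:05:00 on 2012-01-21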

The date_trunc call is important so that session rows are aligned to whole minutes; if that’s not done, none of the rows will line up for the counts.
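
Without the truncation, each session’s minute series inherits the seconds of its own start_at, so different sessions almost never land on the same timestamps:

select generate_series(timestamp '2012-01-21 10:02:30',
                       timestamp '2012-01-21 10:05:10', '1 minute');
-- returns 10:02:30, 10:03:30, 10:04:30, which won't match other sessions' minutes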

That set won’t include minutes during which no users were logged in. To fill those in, the query below uses generate_series again to generate all the minutes from the first minute present to the last, then left joins the counts to that set, coalescing missing entries to zero.


with rounded_sessions as (
select user_id, start_at, end_at,
generate_series(date_trunc('minute',start_at), end_at, '1 minute') as to_the_minute
from sessions
where start_at between '2012-01-21' and '2012-01-28'
),
counts_by_minute as (
select to_the_minute, count(distinct user_id) from rounded_sessions
group by 1
),
all_the_minutes as (
select generate_series(min(to_the_minute), max(to_the_minute), '1 minute') as minute_fu from rounded_sessions
)

select minute_fu as to_the_minute, coalesce(count, 0) as users from all_the_minutes
left join counts_by_minute on all_the_minutes.minute_fu = counts_by_minute.to_the_minute

As a member of PatientsLikeMe’s Data team, from time to time we’re asked to compute how many unique users did action X on the site within a date range, say 28 days, or within several date ranges (1, 14, and 28 days, for example). It’s easy enough to do that for a given day, but doing it for every day over a span of time (in one query) took some thinking. Here’s what I came up with.

One day at a time

First, a simplified example table:

create table events (
user_id integer,
event varchar,
date date
)

Getting unique user counts by event on any given day is easy. Below, we’ll get the counts of unique users by event for the week leading up to Valentine’s Day:

select count(distinct user_id), event from events
where date between '2011-02-07' and '2011-02-14'
group by 2

Now Do That For Every Day

The simplest thing that could possibly work is to issue that query once for each day in the desired span. But we’re looking for something faster, and a bit more elegant.

Stepping back a bit: for a seven-day window, we’re asking that an event on 2/7/2011 count for that day and also for the 6 following days – effectively we’re mapping the events of each day onto itself and 6 other days. That sounds like a SQL join waiting to happen. Once the join happens, it’s easy to group by the mapped date and do a distinct count.

Start with a mapping table like the one below; call it dates_plus_7:

from_date to_date
2011-01-01 2011-01-01
2011-01-01 2011-01-02
2011-01-01 2011-01-03
2011-01-01 2011-01-04
2011-01-01 2011-01-05
2011-01-01 2011-01-06
2011-01-01 2011-01-07
2011-01-02 2011-01-02

With that in place, the SQL becomes easy:

select to_date, event, count(distinct user_id) from events
join dates_plus_7 on events.date = dates_plus_7.from_date
group by 1,2

The results look like:

to_date     event  count
2011-01-05  bar    20
2011-01-05  baz    27
2011-01-05  foo    24
2011-01-06  bar    31

You’ll then need to trim the ends of the results to adjust for where the windows ran off the edge of the data.
That works for me on PostgreSQL 8.4. Your mileage may vary with other brands.
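
For example (a sketch, assuming the events span 2011-01-01 through 2011-02-28 and the counts above live in a hypothetical windowed_counts table or CTE): the first six to_dates have windows reaching back before the data begins, and to_dates after the last event date have windows reaching past its end, so keep only the fully covered ones.

select * from windowed_counts
where to_date between date '2011-01-01' + 6 and date '2011-02-28';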

How Do I Get One of Those?
A dates table like that is a one-liner using the generate_series function:

select date::date as from_date, date::date+plus_day as to_date from
generate_series('2011-01-01'::date, '2011-02-28'::date, '1 day') as date,
generate_series(0,6,1) as plus_day ;

There we get the Cartesian product of the set of dates in the desired range and the set of numbers from 0 to 6. Sum the two, treating the numbers as day offsets, and you’re done.
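
If you’d rather not maintain a physical table, the same set can be inlined as a CTE (a sketch combining the pieces above):

with dates_plus_7 as (
select date::date as from_date, date::date + plus_day as to_date
from generate_series('2011-01-01'::date, '2011-02-28'::date, '1 day') as date,
generate_series(0, 6, 1) as plus_day
)
select to_date, event, count(distinct user_id) from events
join dates_plus_7 on events.date = dates_plus_7.from_date
group by 1, 2;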

I’m James Kebinger, currently a Software Engineer at PatientsLikeMe.
I’m an experienced Software Engineer and Web Developer with a variety of skills, including Java and Ruby/Ruby on Rails, and interests including usability, data analysis, and data visualization. I recently got a Master’s degree in Computer Science from Tufts University, and I’m determined to one day understand statistics.
Scanner timeout exceptions happen in HBase when no network activity occurs between the client and server within the timeout period. This can happen for a variety of reasons, but the one we’ll focus on here is the needle-in-a-haystack case: you’re using a highly selective row filter, so the region server is scanning and discarding lots of data. While it’s great for performance that the discarded data doesn’t come back to the client, the connection may time out.

The first easy fix is to reduce the caching you’ve set up on the connection: with caching configured, there’s network activity only once per n rows (where n is the cache size). Jeff Dwyer has a quick writeup about that.
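
For example (a minimal sketch; the cache size here is illustrative):

import org.apache.hadoop.hbase.client.Scan;

// With a large cache the server may scan and discard rows for a long time
// before it fills a batch to send; a smaller cache means more frequent
// round-trips, each of which keeps the scanner lease alive.
Scan scan = new Scan();
scan.setCaching(100); // illustrative value: trades throughput for liveness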

If adjusting the cache still doesn’t work, you can add a RandomRowFilter to randomly accept some small fraction of the rows and return them to the client. You just need to re-check the filters on the returned rows client-side, but it may be more efficient than reducing the cache size (and possibly more reliable). Just stack it with your existing filters, as in the code sample below.

import org.apache.hadoop.hbase.filter.FilterList;
import org.apache.hadoop.hbase.filter.FilterList.Operator;
import org.apache.hadoop.hbase.filter.RandomRowFilter;

// Let ~0.1% of rows through unconditionally so the server responds in time
RandomRowFilter randomFilter = new RandomRowFilter(0.001f);
FilterList orFilter = new FilterList(Operator.MUST_PASS_ONE);
orFilter.addFilter(randomFilter);
orFilter.addFilter(scan.getFilter());
scan.setFilter(orFilter);

Tune the constant based on estimates of your data’s sparsity and your timeout settings, and away you go.
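
On the client side, rows that got through only via the random filter still have to be re-tested before use. A minimal sketch, where matchesRealPredicate and process are hypothetical stand-ins for your real condition and handling:

import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;

ResultScanner scanner = table.getScanner(scan); // table: your HTable handle
for (Result row : scanner) {
    // MUST_PASS_ONE means a row passed the real filter OR the random one,
    // so re-apply the real condition here before trusting the row
    if (matchesRealPredicate(row)) { // hypothetical helper
        process(row); // hypothetical handler
    }
}
scanner.close();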
