Another use for generate_series: row multiplier

At work, our health insurance has been switched to a high-deductible PPO. Not to worry, we’ve also been granted Health Savings Accounts (HSA) in which to save money, tax-free, to pay bills before meeting the deductible.

That’s all well and good, but I can’t shake the feeling that every time legislation comes out to encourage some activity (retiring, saving for education, paying for health care), the only winner is the financial services industry.

Here’s why: all of these activities require one to maroon a slice of money in an account designated for that purpose. What comes with accounts? That’s right, fees to the bank. The Wells Fargo HSA we’ve got is $4.25 a month (paid, for now, by work). That’s $51 a year to hold money. The interest rate is a paltry 0.1%, so with $2000 in that account (the minimum cash balance before we’re allowed to invest), I’d make about $2.00 (net -$49 if I were paying the fees, as I one day will). Thanks for nothing. Further, while some banks graciously waive fees for meeting minimum balances, it’s harder for many people to meet those minimums when their money is split so many ways.

These accounts limit my flexibility to spend as life events occur, limit the returns on my money, and cost me fees and headaches: more statements to read, cards to carry, and fine print to decode.

If these costs are going to be tax-deductible anyway, why not fix the tax code so that all medical expenses, not just those over a certain threshold, are deductible, rather than giving the banks these shameless handouts? Let me deduct things come tax time.

Not long ago, a surprising story appeared on The Daily Show. Apparently there was a bill (HR 3472) in the last session of Congress that would have required health insurance companies to provide discounts on premiums for healthy activities performed by subscribers. Sounds great, right? It even required actual evidence that the healthy activities were improving health:

Requires any healthy behavior or improvement toward healthy behavior to be supported by medical test result information which is certified by a licensed physician, and the individual to whom it relates, as being complete, accurate, and current.

So at no cost to taxpayers, this would have promoted healthy behaviors for Americans!

As surprising as this bill is, more surprising still is that the American Cancer Society,

I had a request the other day: how many simultaneous users are on the site, by time of day. I already have a session database that’s computed nightly from weblogs: it contains the times at which each session started and ended.

CREATE TABLE sessions
(
user_id integer NOT NULL,
start_at timestamp without time zone,
end_at timestamp without time zone,
duration double precision,
views integer
)

I thought for sure the next step would be to dump some data, then write some Ruby or R to scan through sessions and see how many sessions were open at a time.

That is, until I came up with a nice solution in SQL (Postgres). Stepping back, if I can sample from sessions at, say, one-minute intervals, I can count the number of distinct sessions open at each minute. What I need is a row per session per minute spanned. generate_series is a “set-returning function” that can do just that. In the snippet below, I use generate_series to generate the set of (whole) minutes from the start of each session to its end. That essentially multiplies the session row into n rows, one for each of the minutes the session spans.

From there, it’s easy to do a straightforward group by, counting distinct user_id:

with rounded_sessions as (
select user_id, start_at, end_at,
generate_series(date_trunc('minute',start_at), end_at, '1 minute') to_the_minute from sessions
where start_at between '2012-01-21' and '2012-01-28'
)
select to_the_minute, count(distinct user_id) from rounded_sessions group by 1

The date_trunc call is important so that session rows are aligned to whole minutes; if that’s not done, none of the rows will line up for the counts.
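
To make the alignment concrete, here’s a tiny illustrative query of my own (the timestamps are made up, not from the session data): the first column keeps the session’s odd seconds, while the second snaps to whole minutes so that rows from different sessions can line up.

select generate_series('2012-01-21 10:07:42'::timestamp,
                       '2012-01-21 10:10:42'::timestamp, '1 minute') as raw_minute,
       generate_series(date_trunc('minute', '2012-01-21 10:07:42'::timestamp),
                       '2012-01-21 10:10:42'::timestamp, '1 minute') as whole_minute;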

That set won’t include rows for minutes when no users were logged in. To fill those in, the query below uses generate_series again to generate all the minutes from the first minute present to the last, then left joins the counts to that set, coalescing missing entries to zero.


with rounded_sessions as (
select user_id, start_at, end_at,
generate_series(date_trunc('minute',start_at), end_at, '1 minute') as to_the_minute
from sessions
where start_at between '2012-01-21' and '2012-01-28'
),
counts_by_minute as (
select to_the_minute, count(distinct user_id) from rounded_sessions
group by 1
),
all_the_minutes as (
select generate_series(min(to_the_minute), max(to_the_minute), '1 minute') as minute_fu from rounded_sessions
)

select minute_fu as to_the_minute, coalesce(count, 0) as users from all_the_minutes
left join counts_by_minute on all_the_minutes.minute_fu = counts_by_minute.to_the_minute
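
If you need to roll that up further, say peak and average concurrency by hour, one option (my addition, not part of the original post) is to save the query above as a view with a name of your choosing and aggregate on top of it:

-- assumes the full query above was saved as a (hypothetically named) view:
--   create view concurrent_users_by_minute as <the query above>;
select date_trunc('hour', to_the_minute) as hour,
       max(users) as peak_users,
       round(avg(users), 1) as avg_users
from concurrent_users_by_minute
group by 1
order by 1;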

Computing Distinct Items Across Sliding Windows in SQL

If you’re hooking up a Mac OS X machine to a 1080p monitor via a Mini DisplayPort to HDMI adapter, you may find your display settings don’t have a 1920×1080 option, and the 1080p setting produces an image with the edges cut off. Adjusting the overscan/underscan slider will make the image fit, but it turns fuzzy.

Solution: check the monitor’s settings. On my ViewSonic VX2453, the HDMI inputs have two settings, “AV” and “PC”. Switching it to PC solved the problem, and now the picture is exactly the right size and crisp.

I spent some time futzing around with SwitchRes and several fruitless reboots before discovering the setting, so I hope this saves someone time!

Apache Pig is a great language for processing large amounts of data on a Hadoop cluster without delving into the minutiae of map reduce.

Wukong is a great library to write map/reduce jobs for Hadoop from ruby.

Together they can be really great, because problems unsolvable in pig without resorting to writing a custom function in Java can be solved by streaming data through an external script, which Wukong nicely wraps. The Data Chef blog has a great example of using Pig to choreograph the data flow, and ruby/wukong to compute Jaccard Similarity of sets.

Working with Wukong on Elastic Map Reduce

Elastic map reduce is a great resource – it’s very easy to quickly have a small hadoop cluster at your disposal to process some data. Getting wukong working requires an extra step: installing the wukong gem on all the machines in the cluster.

Fortunately, elastic map reduce allows the use of bootstrap scripts located on S3, which run on boot for all the machines in the cluster. I used the following script (based on an example on stackoverflow):

sudo apt-get update
sudo apt-get -y install rubygems
sudo gem install wukong --no-rdoc --no-ri

Using Amazon’s command line utility, starting the cluster ready to use in pig interactive mode looks like this:

elastic-mapreduce --create --bootstrap-action [S3 path to wukong-bootstrap.sh] --num-instances [a number] --slave-instance-type [ machine type ] --pig-interactive -ssh

The web tool for creating clusters has a space for specifying the path to a bootstrap script.

Next step: upload your pig script and its accompanying wukong script to the name node, and launch the job. (It’s also possible to do all of that when starting the cluster with more arguments to elastic-mapreduce, with the added advantage that the cluster will terminate when your job finishes.)

As a member of PatientsLikeMe’s Data team, from time to time we’re asked to compute how many unique users did action X on the site within a date range, say 28 days, or several date ranges (1, 14, 28 days for example). It’s easy enough to do that for a given day, but to do that for every day over a span of time (in one query) took some thinking. Here’s what I came up with.

One day at a time

First, a simplified example table:

create table events (
user_id integer,
event varchar,
date date
)

Getting unique user counts by event on any given day is easy. Below, we’ll get the counts of unique users by event for the week leading up to Valentine’s Day:

select count(distinct user_id), event from events
where date between '2011-02-07' and '2011-02-14'
group by 2

Now Do That For Every Day

The simplest thing that could possibly work is to just issue that query once per day in the span desired. We’re looking for something faster, and a bit more elegant.

Stepping back a bit, for a seven-day window we’re asking that an event on 2/7/2011 count for that day, and also count for the 6 following days – effectively we’re mapping the events of each day onto itself and 6 other days. That sounds like a SQL join waiting to happen. Once the join happens, it’s easy to group by the mapped date and do a distinct count.

With a table like the one below

from_date to_date
2011-01-01 2011-01-01
2011-01-01 2011-01-02
2011-01-01 2011-01-03
2011-01-01 2011-01-04
2011-01-01 2011-01-05
2011-01-01 2011-01-06
2011-01-01 2011-01-07
2011-01-02 2011-01-02

This SQL becomes easy.

select to_date, event, count(distinct user_id) from events
join dates_plus_7 on events.date = dates_plus_7.from_date
group by 1,2

to_date     event  count
2011-01-05  bar    20
2011-01-05  baz    27
2011-01-05  foo    24
2011-01-06  bar    31

You’ll then need to trim the ends of your data to adjust for where the windows ran off the edge of the data, as in the sketch below.
That works for me on PostgreSQL 8.4. Your mileage may vary with other brands.
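
For example, here’s a sketch of that trimming (mine, not from the original post), assuming the seven-day window above: only keep to_date values where a full week of events could have contributed.

select * from (
  select to_date, event, count(distinct user_id) as count
  from events
  join dates_plus_7 on events.date = dates_plus_7.from_date
  group by 1,2
) windowed
-- drop the leading and trailing dates whose windows run off the edge of the events data
where to_date between (select min(date) + 6 from events)
                  and (select max(date) from events);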

How Do I Get One of Those?
A dates table like that is a one-liner with the generate_series function:

select date::date as from_date, date::date+plus_day as to_date from
generate_series('2011-01-01'::date, '2011-02-28'::date, '1 day') as date,
generate_series(0,6,1) as plus_day ;

There we get the Cartesian product of the set of dates in the desired range and the set of numbers from 0 to 6. Add the two, treating the numbers as day offsets, and you’re done.

Redundant Indexing in PostgreSQL

Occasionally my Mac will refuse to connect to work’s IPSec VPN with the error message:
“A configuration error occurred. Verify your settings and try reconnecting”

This usually happens to me after a long time between reboots, and a reboot usually allows me to successfully connect again. Rebooting when I’m in the middle of something can be a pain, so I did some research and found a better way. There’s a process called “racoon” – it performs key exchange operations to set up IPSec tunnels. Kill it (using kill or Activity Monitor) and your VPN will start working again.

Works on OSX 10.6.5 and 10.6.6

If you have a table with a column included as the first column in a multi-column index and then again with its own index, you may be over-indexing. Postgres will use the multi-column index for queries on the first column. First, a pointer to the Postgres docs that I can never find, and then some data on the performance of multi-column indexes versus single-column ones.

From the docs

A multicolumn B-tree index can be used with query conditions that involve any subset of the index’s columns, but the index is most efficient when there are constraints on the leading (leftmost) columns.


Performance

If you click around that section of the docs, you’ll surely come across the discussion of multicolumn indexes and performance, in particular this passage (emphasis mine):

You could also create a multicolumn index on (x, y). This index would typically be more efficient than index combination for queries involving both columns, but as discussed in Section 11.3, it would be almost useless for queries involving only y, so it should not be the only index. A combination of the multicolumn index and a separate index on y would serve reasonably well. For queries involving only x, the multicolumn index could be used, though it would be larger and hence slower than an index on x alone.

Life is full of performance tradeoffs, so we should explore just how much slower it is to use a multi-column index for single-column queries.

First, let’s create a dummy table:

CREATE TABLE foos_and_bars
(
id serial NOT NULL,
foo_id integer,
bar_id integer,
CONSTRAINT foos_and_bars_pkey PRIMARY KEY (id)
)

Then, using R, we’ll create 3 million rows of nicely distributed data:

rows = 3000000
foo_ids = seq(1,250000,1)
bar_ids = seq(1,20,1)
data = data.frame(foo_id = sample(foo_ids, rows,TRUE), bar_id= sample(bar_ids,rows,TRUE))

Dump that to a text file and load it up with copy and we’re good to go.
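
The exact load step isn’t shown here; assuming the data frame was written out as a headerless CSV (file path and name are mine), the copy might look like this:

-- assumes the file was produced in R with something like:
--   write.table(data, '/tmp/foos_and_bars.csv', sep=',', row.names=FALSE, col.names=FALSE)
copy foos_and_bars (foo_id, bar_id) from '/tmp/foos_and_bars.csv' with csv;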

Create the compound index

CREATE INDEX foo_id_and_bar_id_index
ON foos_and_bars
USING btree
(foo_id, bar_id);

Run a simple query to make sure the index is used:

test_foo=# explain analyze select * from foos_and_bars where foo_id = 123;
QUERY PLAN
---------------------------------------------------------------------------------------------------------------------------------
Bitmap Heap Scan on foos_and_bars  (cost=4.68..55.74 rows=13 width=12) (actual time=0.026..0.038 rows=8 loops=1)
Recheck Cond: (foo_id = 123)
->  Bitmap Index Scan on foo_id_and_bar_id_index  (cost=0.00..4.68 rows=13 width=0) (actual time=0.020..0.020 rows=8 loops=1)
Index Cond: (foo_id = 123)
Total runtime: 0.072 ms
(5 rows)

Now we’ll run 100 queries by foo_id with this index, and then repeat with the single-column index installed (shown after the benchmark script), using this code:

require 'rubygems'
require 'benchmark'
require 'pg'

TEST_IDS = [...] # randomly selected 100 ids in R

conn = PGconn.open(:dbname => 'test_foo')

def perform_test(conn, foo_id)
  time = Benchmark.realtime do
    res = conn.exec("select * from foos_and_bars where foo_id = #{foo_id}")
    res.clear
  end
end

TEST_IDS.map { |id| perform_test(conn, id) } # warm things up?
data = TEST_IDS.map { |id| perform_test(conn, id) }

data.each do |d|
  puts d
end
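
The single-column index for the second run isn’t shown in the post; presumably it’s something like this (index name is my own):

CREATE INDEX foo_id_index
ON foos_and_bars
USING btree
(foo_id);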

How do things stack up? I’d say about evenly:

Remember: Indexing isn’t free, and Postgres is pretty good at using (and reusing) your indexes, so you may not need to create as many as you think.
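
One way to spot candidates for dropping (my suggestion, not from the post): Postgres tracks per-index usage, so rarely-scanned indexes show up in pg_stat_user_indexes.

-- indexes that have seldom (or never) been used since the statistics were last reset
select schemaname, relname, indexrelname, idx_scan
from pg_stat_user_indexes
order by idx_scan asc
limit 20;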

Multiple Phrase Search in PostgreSQL

I built a Flash (Flex) tool to visualize stacked time series – based on the Prefuse Flare Job Viewer application, but extended to show a shallow, two-level hierarchy of time series.

More information and links to the source code on the project page

http://www.monkeyatlarge.com/projects/drillable-stacked-time-series/

Tsearch, the full text search engine in PostgreSQL, is great at rapidly searching for keywords (and combinations of keywords) in large bodies of text. It does not, however, excel at matching multi-word phrases. There are some techniques to work around that and let your application leverage tsearch to find phrases.

Before I go on, I’ll credit Paul Sephton’s Understanding Full Text Search for opening my eyes to some of the possibilities to enable phrase search on top of tsearch’s existing capabilities.

Tsearch operates on tsvectors and tsqueries. A tsvector is a bag-of-words-like structure – a list of the unique words appearing in a piece of text, along with their positions in the text. Searches are performed by constructing a tsquery, a boolean expression combining words with AND (&), OR (|), and NOT (!) operators, then comparing the tsquery against candidate tsvectors with the @@ operator.

select * from articles where to_tsvector('english',articles.body) @@ 'meatball & sub';

will match articles where the body contains the word meatball and the word sub. If there’s an index on to_tsvector('english', articles.body), this query is a very efficient index lookup.
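
That index isn’t shown in the post; it would typically be a GIN expression index, something like this (index name is mine):

CREATE INDEX articles_body_fts_index
ON articles
USING gin (to_tsvector('english', body));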

Single Phrase Search

Now how do we match articles with the phrase “meatball sub”, anywhere in the article’s body? Doing the naive query

select * from articles where body like '%meatball sub%'

will work, but it will be slow because the leading wildcard kills any chance of using an index on that column. What we can do to make this go fast is the following:

select * from articles where to_tsvector('english',articles.body) @@ 'meatball & sub' AND body like '%meatball sub%'

This will use the full text index to find the set of articles where the body has both words, then that (presumably) smaller set of articles can be scanned for the words together.

Multi Phrase Search

It’s simple to extend the above query to match two phrases:

select * from articles where to_tsvector('english',articles.body) @@ 'meatball & sub & ham & sandwich' AND body like '%meatball sub%' AND body like '%ham sandwich%';

That query can be tightened up using postgres’s support for arrays:

select * from articles where to_tsvector('english',articles.body) @@ 'meatball & sub & ham & sandwich' AND body like ALL('{"%meatball sub%","%ham sandwich%"}')

Stepping back a bit, let’s create a table called “concepts” to allow users of an application to store searches on lists of phrases, and let’s also allow the user to specify whether all phrases must match, or just one of them.

CREATE TABLE concepts
(
id serial,
match_all boolean,
phrases character varying[],
query tsquery
)

Now we can specify and execute that previous search this way:

insert into concepts(match_all,phrases,query) VALUES(TRUE,'{"%meatball sub%","%ham sandwich%"}','meatball & sub & ham & sandwich');
select articles.* from articles join concepts on (concepts.query @@ to_tsvector(body)) AND ((match_all AND body like ALL(phrases)) OR (not match_all AND body like ANY(phrases)));

Where this approach really shines compared with external text search tools is in aggregate queries, like counting up matching articles by date.

select count(distinct articles.id), articles.date from articles join concepts on (concepts.query @@ to_tsvector(body)) AND ((match_all AND body like ALL(phrases)) OR (not match_all AND body like ANY(phrases)))
group by articles.date

The logic to combine a list of phrases into the appropriate query, based on whether any or all of the phrases must match, is easy to write at the application layer. It’s desirable not to have to include the wildcards in the phrase array, and it’s easy to write a function to add them at runtime.

CREATE OR REPLACE FUNCTION wildcard_wrapper(list varchar[]) RETURNS varchar[] AS $$
DECLARE
  return_val varchar[];
BEGIN
  for idx in 1 .. array_upper(list, 1)
  loop
    return_val[idx] := '%' || list[idx] || '%';
  end loop;
  return return_val;
END;
$$ LANGUAGE plpgsql;

With that function good to go we can make that long query just a little longer:

select count(distinct articles.id), articles.date from articles join concepts on (concepts.query @@ to_tsvector(body)) AND ((match_all AND body like ALL(wildcard_wrapper(phrases))) OR (not match_all AND body like ANY(wildcard_wrapper(phrases))))
group by articles.date

It’s straightforward to collapse most, if not all, of the SQL in the ON clause into a plpgsql function call without adversely affecting the query plan – it’s important that the tsvector index be involved in the query for adequate performance.
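
As a rough sketch of that idea (the helper name is mine, not from the post), the LIKE logic can be pushed into a function while the @@ comparison stays in the ON clause so the full text index is still used:

CREATE OR REPLACE FUNCTION phrases_match(match_all boolean, phrases varchar[], body text)
RETURNS boolean AS $$
BEGIN
  IF match_all THEN
    RETURN body LIKE ALL(wildcard_wrapper(phrases));
  ELSE
    RETURN body LIKE ANY(wildcard_wrapper(phrases));
  END IF;
END;
$$ LANGUAGE plpgsql;

select count(distinct articles.id), articles.date from articles
join concepts on (concepts.query @@ to_tsvector(body))
             AND phrases_match(concepts.match_all, concepts.phrases, articles.body)
group by articles.date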

Further Work

This approach works well for lists of phrases. To support boolean logic on phrases, one approach might be to compile the request down to a tsquery as above, along with a regular expression to winnow down the matches to those containing the phrases.
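
For instance (my sketch, not from the post), a simple OR of two phrases could compile down to a tsquery plus a case-insensitive regular expression:

-- 'meatball sub' OR 'ham sandwich': the tsquery narrows the candidates via the index,
-- the regex confirms that at least one of the phrases actually appears
select * from articles
where to_tsvector('english', body) @@ to_tsquery('english', '(meatball & sub) | (ham & sandwich)')
  and body ~* 'meatball sub|ham sandwich';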

Deferring Index costs for table to table copies in PostgreSQL

I bought a couple of coax-ethernet bridges in the hopes of speeding media transfers to and from my Tivo HD. The devices work great, but it turns out my Tivo itself is the bottleneck – it just doesn’t serve media very fast, even over ethernet. I recommend a MoCA-based network (http://www.mocalliance.org) if you’re in need of more speed than wireless will give you, but don’t expect miracles on the Tivo front.

Why go back to wires?

Sure, wireless is nice and easy and fast enough for many applications, but you can’t beat a wire for guaranteed bandwidth. I live in a densely populated area in which I can see about 40 wireless networks, and about a third of those overlap my wireless band to one degree or another. I get just a fraction of the theoretical 54 Mbps of a g-based wifi network. Compare that to 100 Mbps point to point for coax (actually around 240 Mbps of total bandwidth if you’ve got a mesh network set up).

Taking the plunge

First you’ve got to get yourself a couple of coax bridges. The problem here is that no one sells them at retail right now. Fortunately Verizon’s FIOS service made heavy use of the Motorola NIM-100 bridge but is now phasing them out, so you can get them cheap on ebay. I got a pair for $75, shipped.

Each bridge has an ethernet port, and two coax ports, one labeled “in”, the other labeled “out”. If you have cable internet you’ll likely put one of these next to your cable modem. In that case, connect a wire from the wall to the coax in port, and another from the out port to the cable modem. An ethernet wire to your router, and now you’ve got an ethernet network running over your coaxial cable wires. Plug another one in somewhere else in your house, wall to the in port, and ethernet to some device and you’re in business. I got north of 80mbps between two laptops over the coax bridge.

This should work out of the box if your bridges came reset to their factory configuration. Unfortunately that means you can’t administer them and they’re using a default encryption key (traffic over the coax is encrypted because it probably leaks a bit out of your house)

Taking control

I’d recommend spending a bit of time to make your new bridges configurable – they have web interfaces; it’s just a matter of getting to them that’s tricky. I pieced together this information from several sources on the web.
The first problem is getting into the web interface. The default settings are for the bridge to auto-assign itself an IP address in the range 169.254.1.x, and it won’t accept admin connections from devices that aren’t on the same IP range, so here’s what you do:

  1. Take a computer and set your ethernet interface to have a static IP address of 169.254.1.100
  2. Connect the computer directly to the bridge over ethernet
  3. Go to http://169.254.1.1 – if that doesn’t work, increment the last digit until it does (or scan for it with the sketch after this list)
  4. When you see the web interface, the default password is “entropic” – they’re apparently the only people who make the chips for these devices
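
If you’d rather not guess at the last digit by hand, a small script can scan the link-local range for you. This is only a sketch under a few assumptions – Ruby with the standard net/http library, a machine already configured with a static 169.254.1.x address, and a bridge whose web interface answers plain HTTP on port 80:

<pre>
require 'net/http'

# Probe the range the bridge defaults to and report which addresses
# answer on port 80. Adjust the range if your bridge picked a higher octet.
(1..20).each do |last_octet|
  host = "169.254.1.#{last_octet}"
  begin
    Net::HTTP.start(host, 80, open_timeout: 1, read_timeout: 1) do |http|
      response = http.get('/')
      puts "#{host} answered with HTTP #{response.code} - try it in a browser"
    end
  rescue StandardError
    # Nothing listening at this address; keep scanning.
  end
end
</pre>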

Once you’re in, the configuration works much like any other network device. You should definitely set a new password under “coax security” – you’ll have to repeat this for all your devices. Also, I’d recommend setting the device to use DHCP or a fixed IP in your usual IP range if you’d like to change anything in the future.

When bulk copying data to a table, it is much faster if the destination table is index- and constraint-free, because it is cheaper to build an index once than to maintain it over many inserts. For postgres, the pg_restore and SQL COPY commands can do this, but they both require that data be copied from the filesystem rather than directly from another table.
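For contrast, here’s roughly what the two paths look like – a file-based COPY versus the table-to-table INSERT … SELECT this post is concerned with. This is just a sketch using the pg gem; the connection, table, and column names are placeholders:

<pre>
require 'pg'

conn = PG.connect(dbname: 'mydb')  # placeholder connection

# Filesystem path: COPY streams a file straight into the table
# (runs server-side, so it needs the right privileges and a server path).
conn.exec("COPY target_table FROM '/tmp/data.csv' WITH (FORMAT csv)")

# Table-to-table path: no file involved, but every inserted row has to
# maintain whatever indexes and constraints exist on target_table.
conn.exec(<<~SQL)
  INSERT INTO target_table (id, name)
  SELECT id, name FROM source_schema.source_table
SQL
</pre>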

For table-to-table copying (and transformations) the situation isn’t as straightforward. Recently I was working on a problem where we needed to perform some poor man’s ETL, copying and transforming data between tables in different schemas. Since some of the destination tables were heavily indexed (including a full-text index), the task took quite a while. In talking with a colleague about the problem, we came up with the idea of dropping the indexes and constraints prior to the data load and restoring them afterwards.

First stop: how to get the DDL for the indices on a table in postgres? Poking around the postgres catalogs, I found the function pg_get_indexdef, which returns the DDL for an index. Combining that with a query I found in a forum somewhere and altered, I came up with a query to get the names and DDL of all the indices on a table (this one excludes the primary key index).
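A sketch of such a query – not the original, but the same idea – joins pg_index to pg_class and calls pg_get_indexdef, filtering out the primary key. Wrapped in Ruby with the pg gem (connection and table name are placeholders):

<pre>
require 'pg'

# Names and DDL of every non-primary-key index on a table.
INDEX_DDL_SQL = <<~SQL
  SELECT c.relname AS index_name,
         pg_get_indexdef(i.indexrelid) AS ddl
  FROM pg_index i
  JOIN pg_class c ON c.oid = i.indexrelid
  WHERE i.indrelid = $1::regclass
    AND NOT i.indisprimary
SQL

conn = PG.connect(dbname: 'mydb')  # placeholder connection
conn.exec_params(INDEX_DDL_SQL, ['some_schema.some_table']).each do |row|
  puts "#{row['index_name']}: #{row['ddl']}"
end
</pre>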

With that and a similar query for constraints, it’s straightforward to build a helper function that gets the DDL for all indices and constraints, drops them, yields to evaluate a block, and then restores the indices and constraints. The method is below.
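A minimal sketch of such a method, not the original: it reuses the INDEX_DDL_SQL query from the sketch above and only handles plain indexes, with constraints left out (they work the same way via pg_constraint and pg_get_constraintdef):

<pre>
# Drop a table's non-primary-key indexes, run the caller's block, then
# recreate the indexes from the captured DDL (even if the block raised).
# Index names are assumed to resolve via the current search_path.
def without_indexes(conn, table)
  indexes = conn.exec_params(INDEX_DDL_SQL, [table]).to_a

  indexes.each { |i| conn.exec("DROP INDEX #{conn.quote_ident(i['index_name'])}") }

  yield
ensure
  (indexes || []).each { |i| conn.exec(i['ddl']) }
end
</pre>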

Use of the function would look like the snippet below. This solution would also allow for arbitrarily complex transformations in Ruby as well as pure SQL.
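Hypothetically, with the sketch above and a pg connection in hand, that might look like this (schema and column names are made up):

<pre>
conn = PG.connect(dbname: 'mydb')  # placeholder connection

# Load and transform rows into a heavily indexed table with its
# indexes dropped for the duration of the copy.
without_indexes(conn, 'reporting.customers') do
  conn.exec(<<~SQL)
    INSERT INTO reporting.customers (id, name, email)
    SELECT id, upper(name), lower(email)
    FROM staging.customers
  SQL
end
</pre>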

For my task, loading and transforming data into about 20 tables, doing this reduced the execution time by two-thirds. Of course, your mileage may vary depending on how heavily indexed your destination tables are.

Here’s the whole module:
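What follows is a reconstruction sketch rather than the original code, pulling the pieces above together into one module under the same assumptions: the pg gem, table names passed as strings, and the primary key left alone. Constraint-backed indexes are skipped in the index query so they aren’t recreated twice:

<pre>
require 'pg'

module BulkLoadHelper
  # Non-primary-key indexes on a table, excluding those that back a
  # constraint (the constraint's DDL recreates them).
  INDEX_SQL = <<~SQL
    SELECT c.relname AS name, pg_get_indexdef(i.indexrelid) AS ddl
    FROM pg_index i
    JOIN pg_class c ON c.oid = i.indexrelid
    WHERE i.indrelid = $1::regclass
      AND NOT i.indisprimary
      AND NOT EXISTS (SELECT 1 FROM pg_constraint pc WHERE pc.conindid = i.indexrelid)
  SQL

  # All constraints on a table except the primary key.
  CONSTRAINT_SQL = <<~SQL
    SELECT conname AS name, pg_get_constraintdef(oid) AS ddl
    FROM pg_constraint
    WHERE conrelid = $1::regclass
      AND contype <> 'p'
  SQL

  module_function

  # Capture and drop the table's indexes and constraints, yield so the
  # caller can bulk load, then restore everything (even if the block raised).
  def without_indexes_and_constraints(conn, table)
    indexes     = conn.exec_params(INDEX_SQL, [table]).to_a
    constraints = conn.exec_params(CONSTRAINT_SQL, [table]).to_a

    constraints.each do |c|
      conn.exec("ALTER TABLE #{table} DROP CONSTRAINT #{conn.quote_ident(c['name'])}")
    end
    indexes.each { |i| conn.exec("DROP INDEX #{conn.quote_ident(i['name'])}") }

    yield
  ensure
    (indexes || []).each { |i| conn.exec(i['ddl']) }
    (constraints || []).each do |c|
      conn.exec("ALTER TABLE #{table} ADD CONSTRAINT #{conn.quote_ident(c['name'])} #{c['ddl']}")
    end
  end
end
</pre>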