TRETC Panel: Innovation and The Energy Crisis

The most captivating and depressing session at TRETC was easily “Innovation and the Energy Crisis”. The panelists (well, three of them: Nathan Lewis, Joseph Romm, and Robert C. Armstrong) painted the most dire picture of global climate change I have heard yet. They argued that within twenty years there will be a massive redirection of capital into mitigating the effects of climate change, which will take such priority that relative luxuries like the space program will go by the wayside (clarification on this here).

The central issue is that in order to minimize climate change while still meeting growing energy demand, we have to double today’s energy infrastructure without any increase in carbon emissions. The problem is that there are no magic bullets: society has to start using every technology at its disposal, from conservation to generation, and the sooner the better, so that we can figure out whether some technologies (like carbon sequestration) will even work. Discussions of energy policy based on cost alone will never solve this problem; risk assessment must become part of energy decisions.

Some interesting tidbits from the discussion:

  • Wind power can likely never account for more than 10% of the world’s energy output.
  • Almost all the significant hydro-power resources are already tapped.
  • There is more energy worldwide in natural gas reserves than in uranium. If the world’s ~11 TW of energy demand were met entirely with uranium, the reserves would last only about 10 years (a rough back-of-the-envelope check of this figure appears after the list). This means that breeder reactors using plutonium have to be part of the arsenal, which means dealing with their proliferation issues.
  • The geothermal energy available averages just 55 mW per square meter, so large-scale geothermal power may never be possible (but home and business heat pumps are still an effective way to assist with heating and cooling).
  • China’s geology prevents any underground carbon sequestration except in a small portion of the northwest. (They’re also apparently asking for the right to “catch up” with the developed nations in terms of cumulative CO2 emissions before having to participate in any reduction treaties.)
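
As promised, a rough sanity check of the uranium claim. Everything here except the ~11 TW demand figure is an order-of-magnitude assumption on my part (reserve size, U-235 fraction, energy per kilogram fissioned), so treat it as a sketch, not panel data. It lands in the same ballpark as their “10 years”:

```java
public class UraniumCheck {
    public static void main(String[] args) {
        // Panel figure: total world energy demand.
        double worldPowerWatts = 11e12;      // ~11 TW
        double secondsPerYear  = 3.156e7;

        // My rough assumptions, NOT numbers from the panel:
        double uraniumReservesKg = 4.7e9;    // ~4.7 million tonnes of identified reserves
        double u235Fraction      = 0.0072;   // natural uranium is ~0.72% fissile U-235
        double joulesPerKgU235   = 8.0e13;   // ~80 TJ released per kg of U-235 fissioned

        // Once-through reactors burn essentially only the U-235.
        double totalFissionJoules = uraniumReservesKg * u235Fraction * joulesPerKgU235;
        double yearsOfSupply = totalFissionJoules / (worldPowerWatts * secondsPerYear);

        // Prints roughly 8 years -- the same ballpark as the panel's claim.
        System.out.printf("An all-uranium world at 11 TW lasts about %.0f years%n", yearsOfSupply);
    }
}
```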

The short time frame to turn this around immediately made me think about patents and how they could help or hinder the process. As companies invent better energy technologies, can governments give them incentives to release those inventions into the public domain, or to license them inexpensively, without taking away the financial incentive to invent in the first place?

I put the question to the panel leader, Robert Armstrong, after the talk. He drew a parallel to the mobilization of the US economy during World War II, when it took just nine months to switch from building cars to building bombers, and inventions were quickly disseminated among all producers. (I think another parallel is the way the US government today guarantees orders of a certain size for vaccines in order to foster their development.)

I wish the audio of the panel were online. [UPDATE! Sometimes wishes come true quickly! The MP3 of this talk is here.] There is also this brief article. And here’s a description of some new research that appeared just before the conference and drove some of the discussion.

Bad things happen when the buyer and the user aren’t the same person

I was at Technology Review’s Emerging Technology Conference (TRETC) today, which was great and about which I have much to say. Before I dig through my notes to put up some posts, something occurred to me based on several things I heard today.

First, I overheard someone say to a colleague who had just gotten a new Motorola phone that Motorolas have crappy interfaces, which I wholeheartedly agree with.

Second, in the panel discussion “Online Application War”, someone pointed out that the reason so many enterprise applications have crappy user experiences (interestingly, he singled out all of Oracle’s back-office applications) is that the buyer probably never has to use the system. This likely means that these apps are being bought on the basis of feature checklists. (The extension of this is that the beauty of web apps is that people can make an end run around their IT department to use the apps they want without having to get permission.)

So this brings me back to my hypothesis about why people buy so many Motorola phones: the “person” buying the phone has never used the phone. They’re making a decision based on features (camera: check, games: check) and the look of the phone. Most buyers never get a chance to actually try the phone out, because most of the display models are empty cases with stickers for screens. That must be why Motorola can justify spending so much more on industrial design than on UI design. Imagine if people were forced to buy cars this way.

Once the user gets the phone home and uses it, it’s either too late or too much hassle to return it (or they just don’t expect better). Then they get used to the warts, two years go by, and the cycle starts anew. Should I take the pink phone, or pay extra for the same phone in blue?

[Update] – I found this post (and related comment thread) on the lackluster UIs in all cell phones over at 37signals.

Almost time for back to school…

Matt’s post about his first week of grad school reminds me of two things: 1) class starts for me next week, and 2) I’m jealous that his program offers an Information Visualization class. That’s something I’m interested in, but despite all my lobbying and rounding up quite a few grad students who are also interested, I can’t get Tufts to offer such a class. I’ll have to look into a directed study or transferring credit from some other school in the area.

I’m bumming in general about my grad program at Tufts, because all the interesting classes are offered during the day. They don’t really have a night program; they try to offer enough classes at or after 4:00, but this semester the offerings are pretty grim. I’d like to take Computational Bio or Computational Geometry, but they’re in the afternoon, twice a week. I could probably get work to let me do that, but then I’d feel stuck there, because a new job isn’t likely to be down with that.

Also a bummer: it will take forever to finish. I contemplated quitting my job and going to school full time for one semester to knock off a bunch of classes at once, but it seemed dumb to do that and not actually be done at the end of it. Maybe next fall. If not, I need to knuckle down and take more than one class at a time.

JPod

I recently finished reading JPod by Douglas Coupland. It was a pretty strange book. The only other Coupland book I’ve read is Microserfs, but that was probably most of a decade ago, so I can’t remember whether it was nearly as weird. The plot is OK, but the ending is pretty weak.

The presentation is interesting (read: strange): there are pages containing the first million digits of pi with one mistake to find, pages full of numbers in which a single zero has been replaced with the letter O, and random words in huge fonts on pages that divide the book into chapters of sorts. These artsy flourishes waste so much paper that the book is an astonishingly fast read given its heft.

The strangest part of the book is the level of narcissism on Coupland’s part. (Perhaps, since this is on my blog, I can’t really talk.) At the beginning, Coupland appears to grind an axe by having his characters declare that Melrose Place was a ripoff of his book Generation X, and that the ripoff was so blatant as to be “actionable”. After that the characters refer to him occasionally, but that’s just leading up to Coupland appearing as a character at least nine different times. The ending even revolves around him. I’ve seen authors give themselves cameos before (I remember Cussler in particular always happening to be yachting nearby when his characters needed help), but this was pretty over the top.

Working on an anti-pattern

The project I’m working on right now is a collection of anti-patterns and just plain terrible code. The upshot is that it’s really hard to make it worse, and oftentimes I can walk away feeling good about making a huge difference with even small changes. My first project appeared well designed, and since I was new then, I felt very constrained in how much I could change. Not anymore: it’s like the Wild West in this code base, and any design is better than no design. It’s definitely been a good way to bust out of my years-long productivity slump.

The project was started several years ago by an offshore contracting company (it seems like they got paid by the line) and then picked up by an in-house but still offshore team to maintain and extend. I don’t want to paint all offshore software-industry workers with the same brush, but in this case the code all appears to have been written by people who just barely know how to program in Java. They just don’t think like computer scientists. For some reason no one seems to think it’s bad for a single class to have five methods that do the same thing. Or methods that are hundreds of lines long. Or building strings by concatenation, multiple times, in loops that run thousands of times. Or checking for duplicates when copying the keys of a Map into a list. Or converting Longs to Integers via a String object. (Some made-up examples of the genre, with fixes, appear below.)
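
To show what I mean, here are hypothetical examples in the spirit of what I keep finding, alongside the obvious fixes. None of this is actual project code:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class AntiPatternSamples {
    public static void main(String[] args) {
        List<Long> ids = Arrays.asList(1L, 2L, 3L);

        // Anti-pattern: building a string by repeated concatenation in a loop.
        // Each pass copies everything accumulated so far, so a loop that runs
        // thousands of times does quadratic work.
        String slow = "";
        for (Long id : ids) {
            slow = slow + id + ",";
        }

        // Fix: append to a StringBuilder and convert to a String once.
        StringBuilder sb = new StringBuilder();
        for (Long id : ids) {
            sb.append(id).append(',');
        }
        String fast = sb.toString();

        // Anti-pattern: converting a Long to an Integer by way of a String.
        Long total = 42L;
        Integer viaString = Integer.valueOf(total.toString());

        // Fix: a direct numeric conversion, with no intermediate String object.
        Integer direct = Integer.valueOf(total.intValue());

        // Anti-pattern: checking for duplicates while copying Map keys into a list.
        // A Map's keys are already unique, and contains() is a linear scan,
        // so this is both pointless and quadratic.
        Map<String, Long> sales = new HashMap<String, Long>();
        sales.put("storeA", 10L);
        sales.put("storeB", 20L);
        List<String> keys = new ArrayList<String>();
        for (String key : sales.keySet()) {
            if (!keys.contains(key)) {
                keys.add(key);
            }
        }

        // Fix: just copy the key set.
        List<String> keysFixed = new ArrayList<String>(sales.keySet());

        System.out.println(slow + fast + viaString + direct + keys + keysFixed);
    }
}
```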

I’ve speculated that the current team must have come from a background of sustaining engineering (where the idea is to fix bugs in the least intrusive way possible), and that’s why they blithely copy the bad code around them. Either that, or for some reason they don’t feel empowered to make changes.

The last few days have been especially great. I’ve been working on performance problems, and the code is so badly written that there are huge chunks of fat to chop out. Two methods I’ve found are O(n^2), so the profiler practically slaps you in the face with them, yet they’ve somehow sat there for years. Replacing them with real implementations reduced the running time of this code by 90%: hours down to minutes for large data sets. That’s fun to talk about in meetings.
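
The O(n^2) methods all share the same shape: a loop that does a linear scan over another collection on every iteration. Here is a made-up illustration of the pattern and the standard fix (build a HashMap index once, then do constant-time lookups); the class and field names are invented, not from the actual code base:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class JoinExample {

    static class Customer {
        final long id;
        final String name;
        Customer(long id, String name) { this.id = id; this.name = name; }
    }

    static class Order {
        final long customerId;
        final double amount;
        Order(long customerId, double amount) { this.customerId = customerId; this.amount = amount; }
    }

    // The shape of the problem: for every order, scan the whole customer
    // list. That's O(n * m), effectively O(n^2) when both lists are large.
    static List<String> labelOrdersSlow(List<Order> orders, List<Customer> customers) {
        List<String> labels = new ArrayList<String>();
        for (Order o : orders) {
            for (Customer c : customers) {
                if (c.id == o.customerId) {
                    labels.add(c.name + ": " + o.amount);
                    break;
                }
            }
        }
        return labels;
    }

    // The fix: build a HashMap index in one pass, then every lookup is O(1),
    // making the whole method O(n + m).
    static List<String> labelOrdersFast(List<Order> orders, List<Customer> customers) {
        Map<Long, Customer> byId = new HashMap<Long, Customer>();
        for (Customer c : customers) {
            byId.put(c.id, c);
        }
        List<String> labels = new ArrayList<String>();
        for (Order o : orders) {
            Customer c = byId.get(o.customerId);
            if (c != null) {
                labels.add(c.name + ": " + o.amount);
            }
        }
        return labels;
    }

    public static void main(String[] args) {
        List<Customer> customers = Arrays.asList(new Customer(1, "Alice"), new Customer(2, "Bob"));
        List<Order> orders = Arrays.asList(new Order(1, 9.99), new Order(2, 5.00));
        System.out.println(labelOrdersSlow(orders, customers));
        System.out.println(labelOrdersFast(orders, customers));
    }
}
```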

Where are the headless macs?

Today’s release of the whopping 24-inch iMac renews the question: why isn’t there a midrange headless Mac, something between the mini and the Pro? Like the iMac hardware, but without the built-in monitor, and capable of driving two screens? It just seems like a shame to have a nice 24-inch LCD that will probably be useful for years after the computer attached to its behind is obsolete. I guess it’ll always be able to play movies.

Ode to unit tests and coarse-grained objects

I recently became a believer in the value of unit testing. I sort of understood the benefits of the test-first approach academically, but I always felt that what I was writing would just be too difficult to unit test (UI stuff, etc.). Before this, I think the last thing I unit tested in a formal, framework-using way was a directed acyclic graph thing I knocked out for work. This project presented the prospect of taking in a pile of data and spitting out some numbers at the end, which would be easy to test, but that’s not what got me there.

The previous code with similar functionality (OK, really the entire project) is a tangled mess of collections of fine-grained value objects that mostly map to rows in tables. A given section of code may be juggling three arrays and repeatedly iterating over them (where did I put that one?) to find objects that relate to objects in the other arrays. This is obviously inefficient and very hard to read (especially when the objects being passed around have overloaded fields whose values depend on the computation phase, in a disgustingly procedural manner). So I proposed taming the collections with more intelligent, coarse-grained objects that would serve as indexes of sorts. Now, instead of looking through one array (or several) to find the sales for item X at store Y, the code can just ask an object, “get me the data for item X at store Y”. Even better, if there’s some kind of normalization or reduction of the data, it can be done as the object is filled with the value objects. This makes a perfect unit-testing specimen, because I can fake up some value objects, toss them in, and then test what comes out, with and without manipulation. (A sketch of such an index object follows.)
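
Here is a sketch of what I mean by a coarse-grained index object. The names (SalesIndex, SalesRecord) and fields are invented for illustration, not the real code:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class SalesIndex {

    // A fine-grained value object of the kind that used to float around
    // in parallel arrays. (Hypothetical; the real fields don't matter here.)
    public static class SalesRecord {
        private final String itemId;
        private final String storeId;
        private final double amount;

        public SalesRecord(String itemId, String storeId, double amount) {
            this.itemId = itemId;
            this.storeId = storeId;
            this.amount = amount;
        }

        public String getItemId() { return itemId; }
        public String getStoreId() { return storeId; }
        public double getAmount() { return amount; }
    }

    private final Map<String, List<SalesRecord>> byItemAndStore =
            new HashMap<String, List<SalesRecord>>();

    // Records are indexed as they arrive; any normalization or reduction
    // of the data could happen right here.
    public void add(SalesRecord record) {
        String key = key(record.getItemId(), record.getStoreId());
        List<SalesRecord> bucket = byItemAndStore.get(key);
        if (bucket == null) {
            bucket = new ArrayList<SalesRecord>();
            byItemAndStore.put(key, bucket);
        }
        bucket.add(record);
    }

    // "Get me the data for item X at store Y": one constant-time lookup
    // instead of repeated scans over several parallel collections.
    public List<SalesRecord> salesFor(String itemId, String storeId) {
        List<SalesRecord> bucket = byItemAndStore.get(key(itemId, storeId));
        return bucket != null ? bucket : new ArrayList<SalesRecord>();
    }

    private static String key(String itemId, String storeId) {
        // The NUL separator keeps ("ab", "c") distinct from ("a", "bc").
        return itemId + '\u0000' + storeId;
    }
}
```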

None of that is rocket science, but it does enable two things. First, significant complexity can be hidden and thoroughly tested without hooking anything up to an app server (fixing something in a JUnit harness is much faster than redeploying to the app server). Second, the code that actually does the final calculations becomes astonishingly easy to write, read, and integrate into the application. Indeed, in this case, the code slipped into place with only one bug.
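
And here is the kind of test this enables: fake up a couple of value objects, toss them in, and assert on what comes out, with no app server in sight. (JUnit 3 style, to match the era, written against the hypothetical SalesIndex above.)

```java
import java.util.List;

import junit.framework.TestCase;

public class SalesIndexTest extends TestCase {

    public void testLookupByItemAndStore() {
        SalesIndex index = new SalesIndex();
        index.add(new SalesIndex.SalesRecord("itemX", "storeY", 10.0));
        index.add(new SalesIndex.SalesRecord("itemX", "storeZ", 99.0));

        // Only the record for the matching (item, store) pair comes back.
        List<SalesIndex.SalesRecord> hits = index.salesFor("itemX", "storeY");
        assertEquals(1, hits.size());
        assertEquals(10.0, hits.get(0).getAmount(), 0.0001);

        // Asking for data that was never added yields an empty list, not null.
        assertTrue(index.salesFor("itemQ", "storeY").isEmpty());
    }
}
```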

From the outside, it seems like a lot of extra work to mock out supporting objects and write tests, but having done it, I feel like it saved me several days of total development time. The end product is better, too. I’m not sure I can get behind actually writing tests first, but writing tests (in parallel, in my case) definitely helps to crystallize my thinking about what my objects actually need to do.

Shifty serving sizes

General Mills (secretly? sneakily?) altered the serving size of the Multi-Bran Chex I eat for breakfast. Instead of 190 calories per serving, it’s now 160 calories. It just seems disingenuous that any food can be made to appear better for you by shrinking its serving size. Then again, does anyone actually eat just one serving of cereal? More mysterious is that the requisite 1/2 cup of skim milk factored in didn’t change. Do people suddenly like a higher milk-to-cereal ratio?

I see the benefit (for not eating too much, if not for the environment) of smaller, individually packaged servings, because there’s a certain mental barrier to opening another bag, but I don’t see the benefit of arbitrarily reducing the serving size.