Wednesday, June 3, 2009

Exploring the Fibonacci Numbers

Recently I decided to teach myself Haskell by using it to solve Project Euler problems. This kills two birds with one stone, since I've been wanting to tackle both. Also, it sounded like a lot of fun.

Project Euler problem #2 is simple enough:
Each new term in the Fibonacci sequence is generated by adding the previous two terms. By starting with 1 and 2, the first 10 terms will be:

1, 2, 3, 5, 8, 13, 21, 34, 55, 89, ...

Find the sum of all the even-valued terms in the sequence which do not exceed four million.
But, since I'm doing this for personal enrichment, I decided to investigate in more detail. I did not notice this before solving it the first time (asking a stupid question in the process), but it turns out that every third Fibonacci number is even. Furthermore, there is a surprising relationship among every third Fibonacci number: f(n) = 4*f(n-3) + f(n-6)[1]. So it is possible to solve the problem efficiently by enumerating just the even Fibonacci numbers. In this post I'll use g(n) to denote the nth even Fibonacci number. The sequence can be implemented in Haskell, using list comprehension syntax, like so:

-- g(n) = 4*g(n-1) + g(n-2)
g :: [Integer]
g = 2 : 8 : [ a + 4*b | (a,b) <- zip g (tail g) ]
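
In GHCi, for example:

> take 5 g
[2,8,34,144,610]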

But can we do better? I remembered one of Raganwald's homoiconic posts about computing Fibonacci numbers efficiently using a matrix multiplication technique (pardon the ascii-art notation):

m^n, where m = ( 1 1 ), is equivalent to: ( f(n)   f(n-1) )
               ( 1 0 )                    ( f(n-1) f(n-2) )

So the nth Fibonacci number is the upper-left element in the matrix obtained by raising m to the nth power[2].
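
Here's a quick sketch of that idea in Haskell (the names mmul, mpow, and fib are mine, not from those posts):

-- a 2x2 matrix (a b; c d), represented as a 4-tuple
mmul (a,b,c,d) (e,f,g,h) = (a*e + b*g, a*f + b*h, c*e + d*g, c*f + d*h)

-- m^n in O(log n) multiplications, by repeated squaring
mpow x 1 = x
mpow x n
  | even n    = let s = mpow x (n `div` 2) in mmul s s
  | otherwise = mmul x (mpow x (n - 1))

-- the nth fibonacci number, under the f(1)=1, f(2)=2 convention used here
fib :: Integer -> Integer
fib n = let (a, _, _, _) = mpow (1,1,1,0) n in a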

Also interesting is that a source article of the post at homoiconic gives an efficient algorithm for calculating the sum of Fibonacci numbers using the matrix representation of the sequence. Now we're getting somewhere! It should be possible to modify the matrix approach so that it generates only the even Fibonacci numbers, using the formula described above[3]. And if we can do that, we can calculate their sum efficiently with the provided algorithm.

You can do exactly that using these two matrices:

b = ( 8 2 )    m = ( 4 1 )
    ( 2 0 )        ( 1 0 )
where b is the base case and m is the multiplier. Notice that if you multiply a generic Fibonacci matrix (where a, b, c are consecutive even Fibonacci numbers, largest first) by m, you get:

( a b ) * ( 4 1 ) = ( 4a+b  a )
( b c )   ( 1 0 )   ( 4b+c  b )
This generates the matrix containing the next consecutive even Fibonacci number (recall that the nth even fib g(n) is equal to 4*g(n-1) + g(n-2), which is 4a+b if a and b are the two previous even fibs). Using the matrix b to bootstrap the multiplication (because it contains 2 and 8, the first two even fibs), it follows that b*m should produce the 3rd even Fibonacci number, b*m*m the fourth, b*m^3 the fifth, and in general:

g(1) = 2
g(n) = b * m^(n-2), for n >= 2 [4]
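
As a sanity check:

b * m = ( 8 2 ) * ( 4 1 ) = ( 34 8 )
        ( 2 0 )   ( 1 0 )   (  8 2 )

and 34 is indeed g(3), the third even Fibonacci number.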
So, now we have an equation for using matrix multiplication to compute even fibonacci numbers. How can we use it to efficiently compute the sum of the first n numbers in that sequence? Let's use a concrete example, n=7. We want to calculate

g(1) + g(2) + ... + g(7) = 2 + b + b*m + b*m^2 + ... + b*m^5
factoring out the common b term, we have

2 + b(m^0 + m^1 + m^2 + m^3 + m^4 + m^5)
where m^0 evaluates to the identity matrix. For convenience, let's define a function sum(n,m) that computes the sum m^0 + ... + m^n, so the above equation can be written 2 + b*sum(5,m). We can see that the bulk of the computation for large values of n will be in calculating each of the matrices m^2, m^3, ..., m^n. But notice the following equivalence:

m^0 + m^1 + ... + m^5 = (m^0 + m^1 + m^2) + m^3(m^0 + m^1 + m^2)
by expanding the equation in this manner, we can calculate sum(5,m) with considerably less work. Notice the recursive structure of the equation: sum(5,m) = sum(2,m) + m^3 * sum(2,m). In general pseudocode:

sum(n,m) =
  if n == 0 then return i //the identity matrix
  if odd(n) then let s = sum(floor(n/2), m)
                 return s + m^ceil(n/2) * s
  if even(n) then let s = sum(n/2 - 1, m)
                  return s + m^(n/2) * s + m^n
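
For example, with n = 6 the sum has an odd number of terms:

m^0 + ... + m^6 = (m^0 + m^1 + m^2) + m^3(m^0 + m^1 + m^2) + m^6

which is sum(2,m) + m^3 * sum(2,m) + m^6 - exactly the 'even' branch above.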

The 'even' case is slightly more complicated because when n is even, the sum m^0 + ... + m^n has an odd number of terms, so it is not evenly divisible into two pieces. The algorithm can be further optimized by having our sum function return m^n as well as the current running sum. This unfortunately obfuscates the algorithm with extra bookkeeping, but it ensures that we aren't wasting any work from one call to the next, and sum(n,m) must calculate m^n or m^(n/2) anyway, so it is not much overhead. The 'odd' step from the above algorithm would then look something like this:

let (s, m_exp) = sum(floor(n/2), m) //m_exp is m^floor(n/2)
return (s + (m_exp * m) * s, m_exp * m_exp * m) //return both the sum and m^n
One final optimization: Raganwald noted in his original post that there is redundancy in the matrix representation, and we can easily rewrite our matrices as 3-element tuples:

b = (8, 2, 0) m = (4, 1, 0)
Multiplication on these tuples is essentially the same as matrix multiplication (details at homoiconic). Putting it all together, we have the final, executable Haskell:
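
Here is a sketch of what that final version looks like, assembled from the pieces above (the helper names are mine):

-- a symmetric 2x2 matrix (a b; b c), stored as the tuple (a, b, c); this
-- representation is safe here because every matrix involved is a polynomial
-- in m, so all the products stay symmetric
type M = (Integer, Integer, Integer)

i :: M
i = (1, 0, 1) -- the identity matrix

mul :: M -> M -> M
mul (a, b, c) (d, e, f) = (a*d + b*e, a*e + b*f, b*e + c*f)

add :: M -> M -> M
add (a, b, c) (d, e, f) = (a + d, b + e, c + f)

-- sumPow m n = (m^0 + m^1 + ... + m^n, m^n), in O(log n) multiplications
sumPow :: M -> Integer -> (M, M)
sumPow _ 0 = (i, i)
sumPow m n
  | odd n     = let (s, p) = sumPow m (n `div` 2)     -- p = m^floor(n/2)
                    q      = mul p m                  -- q = m^ceil(n/2)
                in (add s (mul q s), mul p q)
  | otherwise = let (s, p) = sumPow m (n `div` 2 - 1) -- p = m^(n/2 - 1)
                    q      = mul p m                  -- q = m^(n/2)
                    pn     = mul q q                  -- pn = m^n
                in (add (add s (mul q s)) pn, pn)

-- the sum of the first n even fibonacci numbers:
-- g(1) + the upper-left element of b * (m^0 + ... + m^(n-2));
-- e.g. sumEvenFibs 11 == 4613732, the answer to the Euler problem
sumEvenFibs :: Integer -> Integer
sumEvenFibs 1 = 2
sumEvenFibs n = 2 + topLeft (mul b s)
  where (s, _)            = sumPow m (n - 2)
        b                 = (8, 2, 0)
        m                 = (4, 1, 0)
        topLeft (a, _, _) = a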

The naive implementation is faster for small n, but once you get above n = 5-10k, this optimized version overtakes it and never looks back. The full code, including multiple solutions to the Project Euler question and benchmarking of the naive implementation against this one, is available here.


[1] This is easy to verify yourself. Start with f(n) and keep expanding: f(n) = f(n-1) + f(n-2) = 2f(n-2) + f(n-3) = 3f(n-3) + 2f(n-4) = 3f(n-3) + f(n-4) + f(n-5) + f(n-6) = 4f(n-3) + f(n-6).
[2] For convenience, define the first two Fibonacci numbers to be 1 and 2. Take some time to convince yourself of this, if you like. The computation can be done in O(log n) multiplications (instead of n multiplications) by multiplying in a clever order. See the source articles for details.
[3] g(n) = 4*g(n-1) + g(n-2)
[4] I'm abusing notation slightly here. Technically this is the 2x2 matrix containing g(n), not g(n) itself. Also, note that when n=2, m^0 evaluates to the identity matrix, and b * m^0 = b * i = b, so no special case is needed for g(2) = 8.

Wednesday, December 3, 2008

Data as code: Storing Configuration in C# & ASP.NET

I've been thinking about the best way to store configuration information in C# since I started this blog more than a year ago. I'm a little embarrassed that it's taken me so long, but I think I've hit on the best way of doing it (largely inspired by Ayende's JFHCI). Here it is:

Just use C#

That's right! No XML. No web.config. No database. Just use C#.

The benefits are pretty obvious. The reason configuration is a pain is that you have to spend several lines of code extracting information from some foreign (and usually weakly-typed) source and loading it into C#. You have to worry about checking for null "just in case", etc. Keeping everything in C# saves a lot of code and is radically simpler. This approach is popular in scripting languages like Ruby.
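
For contrast, pulling a list of strings out of web.config usually looks something like this (a sketch; the "ErrorEmails" key is made up):

// the web.config way: stringly-typed, and you still have to
// guard against the key being missing
string raw = ConfigurationManager.AppSettings["ErrorEmails"];
List<string> errorEmails = (raw == null)
    ? new List<string>()
    : new List<string>(raw.Split(','));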

It's less obvious why the apparent drawbacks aren't actually drawbacks. Some typical objections:

  1. But then the configuration is compiled-in. You can't modify it on the fly!
    This is false in ASP.NET. Although there is a pre-compilation option, ever since 2.0 the default has been to read the source files and compile them into temporary assemblies on the fly. This would be a valid criticism for a WinForms app. But seriously, who uses WinForms?
  2. But that's just hard-coding!
    Yes, it is! Embrace it.
  3. Storing configuration data in C# is ugly and unintuitive because C# isn't a good data format.
    True, C# is not Lisp or JSON or XML. Initializing a complex data type the traditional way would involve declaring the type, then initializing it imperatively in a constructor. This separation means that the code does not look like data in the way that traditional config files do.

    But with C# 3.0, I think the language is good enough at representing data to overcome this objection. Collection initializers and object initializers are expressive enough that you should never need a constructor to do initialization.
Let's run with this a little bit. Say you need to add a list of email addresses for generating error emails. Just pop open config.cs, and type this:

public static readonly List<string> ErrorEmails = new List<string>() {
    "joe@bob.com",
    "jane@bob.com",
    "julie@bob.com"
};

That's not so bad, is it? Clear, strongly typed, native C#, and at least as easy to modify on the fly as XML or database configuration. Let's take a more complicated example: Suppose our application allows the user to submit different types of Requests, and each Request has a bunch of metadata associated with it: a code, request group, name, and description. Also suppose we want to look up the Request metadata given a code. We can do something like this:

public static readonly Dictionary<string, CodeData> Codes = new Dictionary<string, CodeData>() {
    { "Code1", new CodeData { Group="...", Name="...", Desc="..." } },
    { "Code2", new CodeData { Group="...", Name="...", Desc="..." } },
    { "Code3", new CodeData { Group="...", Name="...", Desc="..." } },
    { "Code4", new CodeData { Group="...", Name="...", Desc="..." } }
};
That is the sort of information that I probably would have placed in a database before. The benefits of doing it this way should be obvious. Instead of dropping down into SQL to query the data and loading it into C#, I can now start off in C#. I can use LINQ to query the data and return anything I need, as in the example below.
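
For example, assuming the Codes dictionary above lives in a static Config class (and with a made-up group name):

// all request names in a given group, straight out of config
var names = Config.Codes.Values
    .Where(c => c.Group == "Orders")
    .Select(c => c.Name)
    .ToList();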

Yes, it's unconventional. Do it anyway.

Tuesday, June 3, 2008

Javascript Dom Builder

I hate the Dom. So I decided to fix it.

basic usage



domBuilder($get('basic'))
    .div({id: 'div1', style: 'height:50px;border-style:solid;border-width:4px;padding:1px;'})
        .div({style: 'float: left;' })
            .span().text('purple, ').end()
            .a({href: 'http://www.google.com'}).text('google').end()
        .end()
        .div({style: 'float: right;' })
            .span().text('cat').end()
        .end();


Adding event handlers



domBuilder($get('event'))
    .div()
        .span().text('some text').end()
        .input({id:'evt_test', type: 'text'})
            .on('blur', function() {alert('blur!!');})
            .on('focus', function() {alert('focus!')})
        .end();


using custom (not pre-defined) tags


domBuilder($get('custom'), ['select', 'optgroup', 'option'])
    .select({id: 'select_test'})
        .optgroup({label: 'swedish cars'})
            .option({value: 'v'}).text('Volvo').end()
            .option({value: 's'}).text('Saab').end()
            .option({value: ''}).text('').end()
        .end()
        .optgroup({label: 'german cars'})
            .option({value: 'm'}).text('Mercedes').end()
            .option({value: 'a'}).text('Audi').end()
        .end();

Essentially the DomBuilder function takes a dom element and returns an object that lets you add nodes to it easily. The most common html tags are pre-defined. You can also insert an arbitrary tag with the 'add' method: .add('foo', {id: 'foo'}), although the expected usage is to tell dom-builder about the custom tags you want to use and let it create helper methods for you, as in the third example above. One gotcha of the fluent-interface style is that you may be tempted to leave off the end() calls - proper indentation can make the code look correct when it is not, as in the sketch below. end() is only optional at the end of the method chain - in other words, final end()s are implicit.
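
For example, this sketch looks like it builds two sibling divs, but the missing end() means the second div is actually created inside the first:

domBuilder($get('gotcha'))
    .div()
        .span().text('first').end()
    // oops - the .end() that should close the first div is missing
    .div()
        .span().text('second').end()
    .end();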

One benefit of the fluent interface style is that it is easy to extend. For example, you could add a css() method, or some sort of auto-close method or parameter (analogous to <tag />) so that an explicit end() is not required, etc.

dom-builder is built on top of the YUI library, and is inspired slightly by Ruby's xml-builder. It is not an original idea - it has been done before in similar ways and by smarter people. Still, it's always helpful to work something out for yourself, and from an aesthetic point of view I prefer my API decisions over those solutions anyway. Examples, tests, and source are available here.

Friday, April 18, 2008

Pluggable Type Systems and the future of Programming Languages

Programming languages come and go at an astonishing pace, and few enjoy the spotlight of geek popularity for long. Heck, Java used to be cool. And so did Perl. The pace at which technology moves sometimes makes me wonder if we are actually getting anywhere, or if we are just moving for the sake of moving. There's a sort of elite snobbery among hackers that shuns popularity. Did Java fall out of favor simply because it became too popular?

Although I certainly wouldn't put it past some corners of hackerdom to shun something just because it is popular, I do not think this is what is happening in the mad race for newer and shinier programming languages. After all, isn't Java better than C++? And aren't Python and Ruby better than Perl? And isn't Javascript... well, never mind about that.

The point is that languages don't fall out of favor among hackers simply because they become popular. They fall out of favor because hackers are very promiscuous about their programming languages and move on to something better when they find it. And since there is a whole lot of creativity in that community, something better comes along pretty regularly.

I don't expect innovation to continue at this pace forever, though. Many of the good ideas making their way into the mainstream today were invented in academia 30 or 60 years ago. Eventually the popular languages are going to converge around a more-or-less standard set of features, and there will be less need for such rapid innovation.

Encouragingly, I think we're catching a glimpse of one area around which programming languages are going to converge in the near future: pluggable type systems (that is, optional type systems which are dynamic in nature and can be removed, modified, or extended). This is happening from both directions: dynamic languages are adding optional static features, and static languages are becoming more dynamic. Consider the signs:
  1. Javascript 2 will have optional typing
  2. Python 3 will have function annotations, which have no semantic meaning on their own but can be interpreted by 3rd-party libraries.
  3. One of the JRuby developers is working on a toy Ruby dialect that adds type annotation to the language. Interestingly, the type annotations are valid Ruby and would just be ignored (or immediately garbage-collected) by a normal Ruby interpreter - giving them a pluggable flavor.
  4. The next version of C# will support dynamic method lookup.
  5. Qi, an interesting Lisp-like language, has an optional type system.
  6. Boo, a statically-typed scripting language for the CLR, has a "duck" type which allows method lookups to be done at runtime.
Granted, some of these examples are obscure. But the number which are not is pretty remarkable. It shows that with regard to type systems, we as a community of programmers are learning to be pragmatic rather than purist. I think this is a real sign of maturity because it means we are transcending the static/dynamic holy wars. The Java design philosophy of ruthless consistency is no more. We've been down that road, and we didn't like what we saw. We know that not everything is an object and that gee, it would be nice to be able to remove checked exceptions if we wanted to. And although I don't know where we will end up, I do know that it will be better than where we are today.

Monday, January 28, 2008

Switching Keyboard Layouts: Colemak

In an effort to preserve my hands, I've switched my keyboard layout from the contorted-and-inexplicable Qwerty to the comfortable, ergonomic upstart Colemak. I spent a day using Dvorak before doing some more research and settling on Colemak. Both layouts require startlingly less hand motion than Qwerty, and greatly reduce awkward finger motions. The best way to get a feel for the magnitude of the difference is to go to this typing-test site and replay one of the high scores. You can toggle between Qwerty, Dvorak, and Colemak, and see visually how much your hands would be moving in that layout. This applet will let you compare layout statistics on a given piece of text.

All in all, I think Colemak is the clear winner over Dvorak. Here they are, for reference:

Dvorak: [layout image]

Colemak: [layout image]

I was initially skeptical of Colemak because one of its main selling points is that it is much more similar to Qwerty than Dvorak is, and therefore easier to learn. I was afraid that it might have sacrificed ergonomics for similarity to Qwerty. This is not the case. Dvorak is optimized for alternating work between the left and right hands, and using it felt like a delicate left-right dance. Colemak, on the other hand, is optimized for grouping common letter pairs (digraphs) together so that you can hit them with one smooth hand motion. I find this more comfortable than the focus on alternation in Dvorak. Colemak also remedies two minor annoyances I had with Dvorak: "R" isn't on the home row, and "L" is delegated to the right pinkie, making it harder to reach than its frequency warrants. Other Colemak benefits:

  • Caps Lock is mapped to backspace. Brilliant!
  • The punctuation keys are in the same place as in Qwerty. I didn't find the top-row Dvorak placement any worse, but I didn't find it any better either, and this is one less thing to relearn.
  • The Z/X/C/V keys don't move, so the most common shortcut keys don't need to be relearned.

Pretty much the only thing I liked better about Dvorak is the convenient placement of the "-/_" key, which as a programmer I use a lot. But nothing is perfect, I guess.

One of the most interesting (to me) aspects of switching was getting an insight into how the body learns to type quickly (I got up to about 80 WPM in Qwerty, although typing that fast is so uncomfortable that I rarely do it). It happens largely by burning common words and letter combinations into your muscles, not by stringing together individual keys. After almost 3 weeks on Colemak, I find that my most common mistakes aren't on particular letters but in particular contexts. I may get "I" right most of the time, but I still invariably type the Qwerty "I" in certain words. Because of the Colemak emphasis on digraphs, I get the feeling that as I burn it into my fingers it will make for more comfortable and smoother typing.

All things considered I'm very glad I made the switch, and I'd recommend it to anyone who makes a living on a keyboard. Your hands have to last you the rest of your life, and Colemak will help them to.

Friday, January 4, 2008

Code Organization

Even though I read Steve McConnell's seminal Code Complete years ago, there is one piece of advice in it that I use every day. In the section Making Code Read from Top to Bottom, he presents a poorly organized code fragment and reorganizes it:

/* C Example of Bad Code That Jumps Around */
InitMarketingData( MarketingData );
InitMISData( MISData );
InitAccountingData( AccountingData );

ComputeQuarterlyMarketingData( MarketingData );
ComputeQuarterlyMISData( MISData );
ComputeQuarterlyAccountingData( AccountingData );

ComputeAnnualMarketingData( MarketingData );
ComputeAnnualMISData( MISData );
ComputeAnnualAccountingData( AccountingData );

PrintMarketingData( MarketingData );
PrintMISData( MISData );
PrintAccountingData( AccountingData );


/* C Example of Good, Sequential Code That Reads from Top to Bottom */
InitMarketingData( MarketingData );
ComputeQuarterlyMarketingData( MarketingData );
ComputeAnnualMarketingData( MarketingData );
PrintMarketingData( MarketingData );

InitMISData( MISData );
ComputeQuarterlyMISData( MISData );
ComputeAnnualMISData( MISData );
PrintMISData( MISData );

InitAccountingData( AccountingData );
ComputeQuarterlyAccountingData( AccountingData );
ComputeAnnualAccountingData( AccountingData );
PrintAccountingData( AccountingData );

McConnell explains why the refactored layout is superior:

This code is better in several ways. References to each variable are "localized": they're kept close together. Values are assigned to variables close to where they're used. The number of lines of code in which the variables are "live" is small. And perhaps most important, the code now looks as if it could be broken into separate routines for marketing, MIS, and accounting data. The first code fragment gave no hint that such a decomposition was possible.

The idea is to structure your code so that statements are grouped by the data they operate on, not the operations they perform. I'm often tempted to organize code so that grouped statements "line up" as in the first example. This may be something that I have particular trouble with because a lot of my early coding was done in Pascal, which enforces poor layout in its language syntax. Variables must all be declared in one place, at the start of a method. In class definitions, fields must come before properties and methods, when it is clearly better to group related private fields, properties, and getter/setter methods together.

At any rate, this idea can be extended beyond code layout. Just one example: the Rails directory structure is organized by file kind. All the models go in one directory, all the controllers go in another, the tests in another, etc. This is analogous to grouping statements so they "line up", and is exactly backwards. The file groups that belong together logically are those which all relate to the same concept. Folders in Rails ought to be organized by model name (or controller name, for model-less controllers). One folder should hold the model, its controller, tests, and views. This is the important relationship between files: what data they operate on, not what type of file they are.
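
Concretely, a hypothetical per-model layout might look like:

app/
  orders/
    order.rb
    orders_controller.rb
    order_test.rb
    views/
      index.html.erb
      show.html.erb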

Because developing in Rails forces you to frequently jump between related files in different directories, every Rails editor or IDE has shortcuts to help you do it. Essentially these editors are plastering the proper logical organization on top of the Rails folder structure, which alleviates the problem. The uniform structure imposed by the fixed folder layout in Rails is a stroke of genius, but having to use a nonstandard, editor-specific abstraction layer on top of it for actual development limits its awesomeness somewhat.

Monday, December 24, 2007

Encapsulating the Lazy-Load Pattern

In a previous post I suggested a helper method to wrap the Lazy-Load pattern. I'm now convinced that this was the wrong approach to take. For the specific case I mentioned (lazy-loading a property value), the straightforward test for nil is irreducibly simple. Trying to wrap it in a generic procedure introduces more complexity than it alleviates.

For the more general case in which you have a value which you don't want to compute more than once, the correct pattern is to create a generic wrapper class or method which bolts added functionality onto the type in question. This is a common pattern in the .Net framework: some examples are Nullable<T>, Expression<T>, and Future<T> in the next version of Chrome.

So what we want is some sort of Lazy<T> that will wrap the lazy evaluation. There is one example implementation here. You can consult the article for the (mostly trivial) implementation details. But it enables you to write code like this:


int a = 22, b = 20;
var lazy = Lazy.New(() => {
    Console.WriteLine("calculating...");
    return new { Mul = a*b, Sum = a+b };
});
Console.WriteLine("Mul = {0}, Sum = {1}",
    lazy.Value.Mul, lazy.Value.Sum);


The block passed to Lazy.New is evaluated only once, in the first call to lazy.Value. The Lazy.New function is a helper which basically exists to trick the compiler into providing type inference for us so that we don't have to specify the generic parameter.

The main downside to this is that Lazy<T> is a custom class. In order for a function to accept one of these lazy objects as a parameter, it needs to be written specifically with that type in mind. Wouldn't it be cleaner if Lazy.New returned a Func<T> delegate instead?

Well, it can. What we basically want to do is memoize a function of no arguments. And for that the solution presented here is much cleaner:


static class FuncLib
{
    public static Func<R> Memoize<R>(this Func<R> f)
    {
        R value = default(R);
        bool hasValue = false;
        return () =>
        {
            if (!hasValue)
            {
                hasValue = true;
                value = f();
            }
            return value;
        };
    }
}


Let's walk through this. Memoize<R> takes a function with no parameters and returns a function of the same signature. Here's where it gets sneaky: whereas Lazy<T> defined a new class to hold the cached function value, Memoize<R> takes advantage of the fact that closures have access to their containing scope. The returned function carries around references to the local variables value and hasValue, which allow it to remember whether the result has been calculated or not.

As an added bonus, Memoize<R> is an extension method, which allows us to use it in two ways: either as a helper method similar to Lazy.New, or as a method on a Func<T> object. We can then create lazily-evaluated functions like so:


//using Memoize as a helper method, with type inference
var lazy = FuncLib.Memoize( () =>
{
    Console.WriteLine( "calculating..." );
    return 42;
} );

//using Memoize as an extension method,
//  but without type inference
Func<int> lazy2 = ( () =>
{
    Console.WriteLine( "calculating..." );
    return 42;
} );
lazy2 = lazy2.Memoize();

Console.WriteLine( "starting..." );
Console.WriteLine( "result = {0}", lazy() );
Console.WriteLine( "result (again) = {0}", lazy() );
Console.Read();


This can't be beaten for elegance, in my opinion.