I've fiddled with my blog template because I decided I wanted more horizontal viewing space, given that it was using less than a third of my 1920 horizontal pixels. If it feels too spread out for you, I added a draggable handle on the left that lets you resize the main content column. The JavaScript is pretty primitive. If it breaks, drop me a comment.

Saturday, February 1, 2014

CentOS 6 and SSH public key auth

I have to get this one out there because it beats anything I've seen anywhere. I had set up a quick CentOS 6.5 minimal x86_64 VM in VirtualBox to play with, and I spent over an hour trying to get SSH pubkey auth working with it. I checked all the usual things: directory permissions, sshd config settings, ssh and sshd debug output, etc. Then I checked it all again. *head scratch* What's going on here? I stumbled across a forum post somewhere that led to my running the command "restorecon -r <user's home dir>", and magically, it works fine now. What? At the time I had no idea what that command does, and I hate when that happens. It's obviously something to do with SELinux, which I also knew nothing about. (For the record: restorecon resets files to their default SELinux security contexts, and if ~/.ssh or authorized_keys carries the wrong context, sshd is denied access to it and pubkey auth silently fails.) The crazy thing is I'm pretty sure I did nothing that should screw with anything along these lines, which implies that the CentOS 6.5 image is broken in some way.

Saturday, February 23, 2013

Required Reading

Toward the end of last year, I sent a series of "reading assignments" out to my dev team in an effort to get them thinking along the lines of how to improve themselves as developers. Some other devs at my company (Rackspace Hosting) caught wind of it and requested that I widen my distribution of them. Since many found the material enlightening, I'm going to post the whole series here for posterity, along with my original "post date" and comments.

Since I took the trouble to do this in the first place, it should be obvious that I think each of these items has something important to say about the profession of software development. If you're looking to be challenged, read on. If you think you're fine right where you are, then maybe this isn't for you.

Final note: Remember that this was spread out across several weeks, so while each piece of reading is relatively bite-sized, if you take them all together, it's going to take some time!

Wednesday, September 05, 2012

Beating the Averages by Paul Graham

Especially interesting for his description of the "Blub Paradox".

Wednesday, September 12, 2012

What Made Lisp Different by Paul Graham

A short essay with an interesting list of parameters you could use to evaluate the power of a programming language. An easy read at under 700 words.

Friday, September 21, 2012

Succinctness is Power by Paul Graham

What is "power" in a programming language? This article isn't a direct answer to that question, but it makes some interesting observations.

Wednesday, September 26, 2012

The Rise of "Worse is Better" by Richard Gabriel

"Unix and C are the ultimate computer viruses."

nuff said

Wednesday, October 03, 2012

Taking a slight detour away from Paul Graham and Lisp-ish things this week:

Teach Yourself Programming in Ten Years by Peter Norvig

Wednesday, October 10, 2012

This week, continuing last week's theme of becoming a better programmer: why working harder doesn't pay off.

Hard Work Does not Pay Off by Olve Maudal

Wednesday, October 17, 2012

Today, why things are the way they are (with respect to programming language usage). This essay is longish, but there are sections you'll breeze through if you've kept up with previous weeks' reading:

Revenge of the Nerds by Paul Graham

"Suppose, for example, you need to write a piece of software. The pointy-haired boss has no idea how this software has to work, and can't tell one programming language from another, and yet he knows what language you should write it in. Exactly. He thinks you should write it in Java. Why does he think this?"

Thursday, October 25, 2012

I know this looks long. Just stick with it. It's probably the shortest reading so far.

First, I'm a day late this week, but I haven't forgotten. It's just been busy. This is the next-to-last reading material that I plan to send out (in this series, anyhow). Look for the big finale next week.

--- THE IMPORTANT BIT ---

Now for this week's provocation of thought... If you did the reading a handful of weeks back, entitled "Succinctness is Power", you might have caught an unlinked reference to a study that compared various aspects of some programming languages. I chased that study down. It's over a decade old, but some of its conclusions are still interesting. One of them is along the lines of the aforementioned article: "Designing and writing the program in Perl, Python, Rexx, or Tcl takes only about half as much time as writing it in C, C++, or Java and the resulting program is only half as long." This lends weight to the premise of the article, which is that a language that allows for shorter programs is a more powerful language.

--- /THE IMPORTANT BIT ---

If you can accept that paragraph after reading it twice, then consider this week's reading accomplished! Otherwise, you can find the study in its entirety (as a PDF) at http://page.inf.fu-berlin.de/~prechelt/Biblio/jccpprtTR.pdf.

And here's a quick summary of the study and its (other) conclusions. It found that when the same program was implemented in several languages by many different programmers:

  • No unambiguous differences in program reliability between the language groups were observed.
  • The typical memory consumption of a script program is about twice that of a C or C++ program. For Java it is another factor of two higher.
  • For the initialization phase of the phonecode program (reading the 1 MB dictionary file and creating the 70k-entry internal data structure), the C and C++ programs have a strong run time advantage of about factor 3 to 4 compared to Java and about 5 to 10 compared to the script languages.
  • For the main phase of the phonecode program (search through the internal data structure), the advantage in run time of C or C++ versus Java is only about factor 2 and the script programs even tend to be faster than the Java programs.
  • Within the script languages, Python and in particular Perl are faster than Rexx and Tcl for both phases.
  • For all program aspects investigated, the performance variability due to different programmers (as described by the bad/good ratios) is on average about as large or even larger than the variability due to different languages.

That is all.

Wednesday, October 31, 2012

As promised, this week marks the end of this reading series. As such, the reading is light and generally reflective and maybe not even relevant to the rest of the series. But it's fun. You'll find it at the end of this email.

The only "serious" reading for this week is this: if you've followed the reading for these last two months, you should be asking yourself, "Am I a Blub programmer? If I am, how can I not be?". You should also be able to answer yourself: "I need to take some time to find something up the power continuum from Blub and start learning it because spending time practicing my craft is the single best way to master it." You should also have some ideas about what makes a programming language "powerful" and realize that "the industry" rarely (never?) chooses the right language for the right reason. That means it's up to you to drag yourself beyond Blub and into some higher state of awareness.

That being the case, here are a few ways that I would recommend for you to continue your journey:

  • For the truly fortitudinous[1], start "Practical Common Lisp", available for free online. It will wreck everything you thought you knew about how to program: http://www.gigamonkeys.com/book/
  • For the slightly less intrepid, pick up a copy of Seven Languages in Seven Weeks and pick your poison. Some of the languages in this book will also bend your mind in similar ways to Common Lisp: http://goo.gl/icLSy
  • Watch Coursera.org for the next offering of "Functional Programming Principles in Scala": https://www.coursera.org/course/progfun

Finally, this week's reading: Epigrams on Programming by Alan Perlis. They've been around for a long time, so some are dated and no longer relevant. Others are still extremely poignant. Read and enjoy.

[1] Yes, it's a word: http://dictionary.reference.com/browse/fortitudinous

Monday, November 28, 2011

What is JSR 348 (JCP.next)?

I was thinking about how to keep track of the articles I read and things I take time to learn about, like JSR 348, and I realized I should just stick it on my blog. That's kinda why it's here. So here we go: JSR 348 is nicknamed JCP.next because it revises the Java Community Process, which dictates how revisions to the Java platform are made. The final release was 18 October 2011, which means that every new JSR after that date is required to conform to the new requirements of JCP 2.8 (the new revision).

Generally, JSR 348 is focused on making sure in-progress JSRs are more transparent, are easier for anybody to get involved with, and keep moving at a reasonable pace so they don't stall out unfinished, like many have in the past. Just some of the measures include requiring that communication take place in a public forum, widening the pool of people who can be involved in JSRs to include just about anybody, and providing for removing uncooperative or unproductive (or just plain missing) members from the Expert Group of a JSR as well as replacing the Spec Lead if necessary. The Executive Committee was itself the Expert Group for JSR 348, and while working on the JSR, they followed all the guidelines that they themselves were putting into the new version of the process. Comments from the EC seem to indicate they felt it was a very positive experience.

Overall, it seems like an attempt initiated by Oracle to overcome some of the criticism after their takeover of Java from Sun and the fears that Java would become more closed and tightly owned by Oracle and possibly stagnate. From what I've seen while browsing around, there's a lot of hope that JSR 348 will be a success on those fronts. It only took a handful of months to complete and puts some pretty aggressive measures in place that look like they'll keep things open and moving along at a good pace. More importantly, it's gaining quite a lot of acceptance in the community, and it seems like people are breathing a sigh of relief at seeing Java looking like it's ready to come out of the quagmire it seemed to be in. Maybe the major releases will come along more often now and we can look forward to regular infusions of new language improvements?

A couple of the articles I read that are fairly short but give a good overview of the JSR:
The JCP Program Moves Towards a New Version: JCP.next
JCP 2.8 Ushers in a New Era of Complete Transparency

Wednesday, April 20, 2011

Effective Testing of a Jersey Resource Class

For a while now, I've been using Jersey for ReSTful web services with the resource classes and dependencies managed by Spring. I'm pretty avid about TDD, and a tricky question has been how to effectively test the JAX-RS resource classes. This post will take you on a brief journey through my evolving answer to this question. We'll examine the problems and considerations involved in testing as I describe several iterations of my testing approach, ending with my current approach, which I feel is very effective. Depending on your needs, you may find one of the earlier versions to be quite satisfactory.

Iteration One: The Jersey Test Framework

Just starting off, you're faced with the question of whether to just do pure unit tests of the resource classes or whether you should find a way to deploy them to a Jersey container so you can test the ReSTfulness of your resources. If you're choosing the former, then you probably wouldn't even be reading this post. If, on the other hand, you'd like to deploy your JAX-RS resource classes into Jersey and test the ReST calls thus created, then a quick search including the terms "test" and "Jersey" should point you toward the "Jersey Test Framework". I'm not going to cover that in this post, as it's fairly straightforward to set up and I know there are some other blogs out there that talk about it. Suffice it to say that the JerseyTest class from the framework lets you very easily start up a Grizzly container running your Jersey application that is very accessible to tests. Since I use Spring with Jersey, all I really have to do is pass in my existing Spring context files to load, and my Jersey app is running and ready to test. This is iteration one: your Jersey app deployed for testing just as it might be in production.

The first thing you'll notice when following this approach is that your test suite will quickly slow to a crawl. Especially as your app grows in complexity and number of tests, it will start taking longer to start up, and starting it up fresh for every single test method (the behavior of JerseyTest) is just out of the question. This led me to the question of how often it's acceptable to start (or restart) the test container. It was the doorway leading to iteration two.

Iteration Two: Customizing the JerseyTest

On the matter of where and when to start a test container, once per test is far too often. Once per test suite is a little inflexible and would require some extra test code outside of the typical test class, which is all that I ever write. So I eventually decided that starting it once per test class wasn't too bad. You probably won't/shouldn't have a huge number of JAX-RS resource classes in one module/project anyway, and it's not likely you'd write more than one test class per resource; therefore, the number of startups needed to test a module will be kept fairly low. Additionally, it gives you the flexibility to configure the Jersey container specifically for each test class. Getting to this point just requires a little bit of extra test support code. That's the meat of iteration two: writing your own test support class to take control of the starting of the Jersey container. This class will ensure that the app is only started once and maintain the WebResource that's created by the JerseyTest to hand out to tests as needed. This is a fairly simple exercise that's left up to the reader, as there are far more interesting things to come.
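Jersey aside, the heart of that support class is just a guard that starts the container only when needed. Here's a minimal sketch of that idea with the container stubbed out as a counter (ContainerGuard and its method names are my own illustration, not code from the project):

```java
// Hypothetical sketch: a once-per-configuration startup guard.
// The real version would start/stop Grizzly via JerseyTest where the
// counter below is incremented; here it's stubbed so the idea stands alone.
public class ContainerGuard {
    private static String runningConfig; // context file currently "deployed"
    private static int startCount;       // how many real startups occurred

    // Start the container only if it isn't already running this configuration.
    public static synchronized void ensureRunning(String contextFile) {
        if (contextFile.equals(runningConfig)) {
            return; // already up with the right configuration; do nothing
        }
        // Real implementation: stop the old container (if any) and start
        // Grizzly with the given Spring context file.
        runningConfig = contextFile;
        startCount++;
    }

    public static synchronized int getStartCount() {
        return startCount;
    }
}
```

Each test class calls ensureRunning() from its setup method with its own context file; repeated calls within the class cost nothing, and a new class with a different configuration triggers exactly one restart.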

As your project's complexity continues to grow, you'll begin to wonder about the wisdom of deploying your full application stack to the Jersey test container, and rightly so. First, it continues to slow down your tests by adding to startup time. Second, and more importantly, it can no longer be called anything like a unit test, because in order to test your ReST calls, you'll need intimate knowledge of all that goes on under them, such as what constitutes valid input, possibly database constraints, and other preconditions that have to be satisfied in order for your ReST call to succeed. In short, your tests will become bloated, unreadable, and meaningless. Third, and possibly most insidious, if you're following your code metrics, a test of this type will give you all kinds of bogus test coverage. While you're only intending to test the JAX-RS resource class, it's going to be touching lines of code all through your codebase and contributing to code coverage without really asserting anything about those lines. It can give a real false sense of security. This brings us to iteration three, and the really interesting part of this post.

Iteration Three: Sneaking in a Mock

What can you do to test your JAX-RS resource deployed into Jersey without testing all the stuff that's normally under it in a real deployment? That sounds like a job for mocks! But wait! Our application is deployed inside a servlet container that's running in our tests, and there's no way to get access to stuff "inside" it. Specifically, since I just pass in a list of Spring XML files and Jersey starts the ApplicationContext itself, how can I inject a mock into the context? Its lifecycle is completely out of my hands. It's hidden away inside Jersey in the Grizzly container. Believe me. I dug around the source code for a few hours, and there's no easy way to get the Spring ApplicationContext out of there. So it's time to get a little inventive. First, let's make up some brief sample code. Suppose you have a resource class like this:

    import javax.ws.rs.GET;
    import javax.ws.rs.Path;
    import javax.ws.rs.PathParam;
    import javax.ws.rs.Produces;
    import javax.ws.rs.core.MediaType;

    import org.springframework.beans.factory.annotation.Autowired;

    @Path("/foo")
    public class FooResource {
        @Autowired
        private FooService service;

        @GET
        @Path("{id}")
        @Produces(MediaType.APPLICATION_JSON)
        public Foo getFoo(@PathParam("id") int id) {
            return service.getFoo(id);
        }
    }

It depends on a service with this interface:

    interface FooService {
        Foo getFoo(int id);
    }

The question is how to get a mock of FooService that's available to your test wired into the FooResource bean, which is out of your test's control. There are certainly a number of possibilities out there, but I strive for simplicity. Here is the (first) simplest thing I could come up with:

    class FooTestService implements FooService {
        private static FooService delegate;

        public static void setDelegate(FooService delegate) {
            FooTestService.delegate = delegate;
        }

        @Override
        public Foo getFoo(int id) {
            if (delegate != null) {
                return delegate.getFoo(id);
            }
            return null;
        }
    }

This is a class that lives with your tests. It implements, and therefore is-a, FooService. (Note that TestFooService, while more readable, is a BAD name for it, as the Maven surefire plugin assumes that any class beginning or ending with "Test" is a test class and will fail the build because of your "test class" without any test methods in it!)

Since it's a FooService, it's eligible for wiring into a FooResource. It has a static field and setter, meaning that no matter where an instance of this thing is created, as long as we're in the same JVM (ClassLoader, really), we can inject a value as the "delegate", making it available to all instances of the class, wherever they may be. The delegate receives all calls made to any instance of this class. The if statement is there to provide some flexibility in use: you don't have to provide a delegate if you don't want/need to. By creating a test-specific Spring context file for our test, we gain the ability to inject a mock into a Spring context that is otherwise out of our control:

    <!-- file FooResourceTest-context.xml (Spring namespace declarations omitted for brevity) -->
    <beans>
        <context:annotation-config/>
        <bean class="com.foo.FooResource"/>
        <bean class="com.foo.FooTestService"/>
    </beans>

With this, our test can now do something like:

    class FooResourceTest {
        private FooService fooService;

        @Before
        public void setUp() {
            // start the Grizzly container with our test-specific context file
            ensureGrizzlyIsRunning("/FooResourceTest-context.xml");
            // create a mock of the FooService
            fooService = mock(FooService.class);
            // inject the mock
            FooTestService.setDelegate(fooService);
        }

        @Test
        public void testGetFoo() {
            ...
        }
    }

...and voila! We're now testing the FooResource in isolation. We can test indirect outputs, such as verifying that it calls the appropriate service method(s) at the appropriate time(s). We can provide indirect inputs by making our service mock return different Foo objects or throw exceptions. We've taken care of the three issues I mentioned in iteration two: your tests have minimal startup costs, they're very narrow in scope, and they don't create a bunch of test coverage out of thin air.
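Stripped of Spring, Jersey, and mocking libraries, the static-delegate trick in isolation looks like this (a toy sketch; GreetService stands in for FooService, and none of these names come from a real API):

```java
// Toy demonstration of the iteration-three pattern: the "container" creates
// its own service instance, yet the test still controls its behavior
// through a static delegate field.
public class StaticDelegateDemo {
    public interface GreetService {
        String greet(int id);
    }

    // Stand-in for FooTestService: one static delegate shared by all instances.
    public static class GreetTestService implements GreetService {
        private static GreetService delegate;

        public static void setDelegate(GreetService d) {
            delegate = d;
        }

        @Override
        public String greet(int id) {
            // Null-safe: with no delegate set, fall back to a harmless default.
            return delegate != null ? delegate.greet(id) : null;
        }
    }

    public static void main(String[] args) {
        // The "container" instantiates the service out of the test's hands...
        GreetService containerManaged = new GreetTestService();
        // ...but the test can still smuggle behavior in via the static setter.
        GreetTestService.setDelegate(new GreetService() {
            @Override
            public String greet(int id) {
                return "hello #" + id;
            }
        });
        System.out.println(containerManaged.greet(7)); // prints "hello #7"
    }
}
```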

Everything will be fine with this approach for a while, but then you'll find that it's so useful that these *TestService classes are springing up everywhere. You might even have multiple copies of them to serve different modules. I had three separate copies of a particular one in my project. For the benefit these things give you, the cost is really quite small. It takes a bit of effort to write the first time, but there's hardly anything to them, and they virtually maintain themselves since adding an interface method immediately requires you to add the method to your *TestService classes. Still, when you get to where you have about 20 of them, many of which are duplicates, you'll become convinced that there is a definite pattern here that needs to be pulled out of the crowd. It took a couple of weeks sitting in the back of my mind to bring all the pieces together, but I finally arrived at iteration four, which so far looks like the end of the line for this particular puzzle.

Iteration Four: Generate Mock Containers Dynamically

For the final, long-term solution to this problem, I was looking for a single, simple class that could provide the functionality of all of these *TestServices that were floating around. I knew it would involve some kind of dynamic proxying or similar magic. I thought I might use something from Spring to help out. Finally, in a moment of clarity, it all came together, and it looked a bit like this (the code isn't in front of me right now, and I'm not checking this with a compiler, much less running it):

    import java.lang.reflect.InvocationHandler;
    import java.lang.reflect.InvocationTargetException;
    import java.lang.reflect.Method;
    import java.lang.reflect.Proxy;
    import java.util.HashMap;
    import java.util.Map;

    import org.springframework.beans.factory.FactoryBean;

    public class DelegatingProxyFactoryBean implements FactoryBean<Object> {
        private static Map<Class<?>, Object> proxiesByType = new HashMap<Class<?>, Object>();
        private Class<?> proxyInterface;

        public void setProxyInterface(Class<?> proxyInterface) {
            this.proxyInterface = proxyInterface;
        }

        public static void addProxy(Class<?> proxyInterface, Object delegate) {
            Object proxy = Proxy.newProxyInstance(
                    DelegatingProxyFactoryBean.class.getClassLoader(),
                    new Class<?>[] { proxyInterface },
                    new DelegatingInvocationHandler(delegate));
            proxiesByType.put(proxyInterface, proxy);
        }

        @Override
        public Object getObject() {
            return proxiesByType.get(proxyInterface);
        }

        @Override
        public Class<?> getObjectType() {
            return proxyInterface;
        }

        @Override
        public boolean isSingleton() {
            return true;
        }

        static class DelegatingInvocationHandler implements InvocationHandler {
            private Object delegate;

            public DelegatingInvocationHandler(Object delegate) {
                this.delegate = delegate;
            }

            @Override
            public Object invoke(Object proxy, Method method, Object[] args) throws Throwable {
                try {
                    return method.invoke(delegate, args);
                } catch (InvocationTargetException e) {
                    throw e.getCause(); // rethrow what the delegate actually threw
                }
            }
        }
    }

It's a fairly simple class, but it has some voodoo in it. Overall, it's just a FactoryBean. The layer of indirection provided by the factory pattern is just what we need so that we can configure one bean in a Spring context to be any type. The class combines static and non-static code in the same way the former FooTestService did to allow us to smuggle mock objects into a Spring context. Ultimately, the mocks become the output of the getObject() method, so they become beans in a Spring context that can be wired into other beans, just like we did in iteration 3.

The only code here that could be considered notable is in the addProxy() and invoke() methods. The addProxy() method is analogous to the setDelegate() method of iteration three. It's how we tell this thing about a mock that we want it to use and the interface for that mock--FooService in our previous example. When we give it the interface and mock, it creates a dynamic proxy based on the interface and stores the proxy away to be returned by the getObject() method later. The DelegatingInvocationHandler is what delegates method calls on the proxy to the mock. In iteration three, we implemented all the interface methods ourselves, filling in the delegation code manually. Here, the invocation handler receives a method invocation and then turns around to invoke the same method on the mock object it was constructed with: no more code to write or maintain!

The invoke() method in the handler looks a little fishy because it catches and unwraps an exception; static analysis tools may even flag it for discarding a stack trace. The reason is that when you invoke a method using reflection, if the invoked method throws an exception, it's wrapped in an InvocationTargetException. Since we want our proxies to behave exactly like the objects they're delegating to, we need to strip the outer InvocationTargetException from any exception being thrown. Here's what this thing looks like in use:

    <!-- file FooResourceTest-context.xml (Spring namespace declarations omitted for brevity) -->
    <beans>
        <context:annotation-config/>
        <bean class="com.foo.FooResource"/>
        <bean class="com.foo.DelegatingProxyFactoryBean">
            <property name="proxyInterface" value="com.foo.FooService"/>
        </bean>
    </beans>

    class FooResourceTest {
        private FooService fooService;

        @Before
        public void setUp() {
            // create a mock of the FooService
            fooService = mock(FooService.class);
            // inject the mock
            DelegatingProxyFactoryBean.addProxy(FooService.class, fooService);
            // start the Grizzly container with our test-specific context file
            ensureGrizzlyIsRunning("/FooResourceTest-context.xml");
        }

        @Test
        public void testGetFoo() {
            ...
        }
    }

The changes here from iteration three are trivial: in the context file, we add our FactoryBean and tell it to spit out a FooService. In the test, we do exactly the same steps as before, but in a different order. The FactoryBean as written requires that the proxies be added before the Spring context starts.

Now, you might have noticed that the dynamic proxies aren't strictly necessary here. They're just a reflection of what we wrote manually in iteration 3. You could remove that bit and just have the factory bean return the mocks directly; however, with a bit more effort, you can easily make the mocks optional and, at the same time, remove the requirement of setting the mocks before starting Spring. As written above, getObject() will return null if no proxy has been added for the type set on the FactoryBean instance, and that's not very friendly. Just add some code there to create a proxy that has no delegate, and modify the invocation handler to return some default values if there's no delegate to invoke. Now it can generate test proxies with some basic behavior without needing a mock. Then alter addProxy() to inject mocks into existing invocation handlers instead of always creating new ones. Problems solved.
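Here's a sketch of what those enhancements might look like, reduced to just the invocation handler (a hedged illustration using only the JDK; the default values chosen are one reasonable option, and java.util.Comparator is used purely for demonstration):

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.InvocationTargetException;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;
import java.util.Comparator;

// Sketch of the "optional mock" enhancement: with no delegate the handler
// returns neutral defaults, and a mock can be injected into it at any time,
// even after the Spring context (and thus the proxy) has been created.
public class OptionalDelegateHandler implements InvocationHandler {
    private Object delegate; // null until a test supplies a mock

    public void setDelegate(Object delegate) {
        this.delegate = delegate;
    }

    @Override
    public Object invoke(Object proxy, Method method, Object[] args) throws Throwable {
        if (delegate == null) {
            return defaultValueFor(method.getReturnType());
        }
        try {
            return method.invoke(delegate, args);
        } catch (InvocationTargetException e) {
            throw e.getCause(); // unwrap so the proxy behaves like the real object
        }
    }

    // Neutral default per return type; primitives must be boxed to the
    // exact wrapper type or the proxy will throw a ClassCastException.
    private static Object defaultValueFor(Class<?> type) {
        if (type == boolean.class) return false;
        if (type == char.class) return '\0';
        if (type == byte.class) return (byte) 0;
        if (type == short.class) return (short) 0;
        if (type == int.class) return 0;
        if (type == long.class) return 0L;
        if (type == float.class) return 0f;
        if (type == double.class) return 0d;
        return null; // objects and void
    }

    public static void main(String[] args) {
        OptionalDelegateHandler handler = new OptionalDelegateHandler();
        Comparator cmp = (Comparator) Proxy.newProxyInstance(
                OptionalDelegateHandler.class.getClassLoader(),
                new Class<?>[] { Comparator.class },
                handler);
        System.out.println(cmp.compare("a", "b")); // no delegate: prints 0
        handler.setDelegate(Comparator.naturalOrder());
        System.out.println(cmp.compare("a", "b") < 0); // delegate answers: prints true
    }
}
```

With this handler backing each proxy, getObject() can always hand out something usable, and addProxy() reduces to a setDelegate() call on the handler already associated with that interface.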

That's the end of the road. This DelegatingProxyFactoryBean--with the enhancements just discussed--allowed me to remove around 20 test classes of the iteration three variety. Furthermore, you'll notice that there are no references at all to "test" or "mock" in that class. It's generic enough that it could perhaps find its way into other use cases.

One final caveat: as I mentioned at least a couple of times, I haven't tested any of this code. I wrote it all in Notepad++ with a couple of references to some javadocs. If you find problems, let me know, and I'll update this post.

Sunday, November 14, 2010

Tomatoes and Routers

No, this isn't a candidate name for a new blog.

Over the past several months, during which my blog has been deafeningly silent, I've been writing RESTful web services based on Jersey and Spring 3, organized as a Maven project of around 20 modules. The entire project has an almost fully-automated build and release cycle, with Hudson and Sonar running nightly code analyses. Our test coverage is about 87%, and we're aggressively pushing it higher. Cyclomatic complexity per method is 1.6. There are a ton of things I could blog about, but I just haven't taken the time to think through the distinct topics I should write about. This post is not about that.

This post is about my new Linksys WRT54GL. While I may be the last person on earth to get one of these (>3k customer reviews on Newegg? seriously??), I'm still going to give a brief synopsis, because it's just really cool. Yes, it's an old model--a dinosaur in computer years. Yes, it only supports 802.11g. But respectively: there's a reason it's both the most- and highest-rated of all wireless routers on Newegg, and do you *really* need 802.11n? If you need the most speed possible out of your wireless network, then give this a pass and go with a wireless-n product. If, like me, you never find yourself wishing for faster wireless--and you're at least 23% geek--not to say I'm limited to 23%--and not to say I'm not (how many points do I get for reading a Lisp book in my spare time?)--then this could be the perfect router for you.

Initial setup was a snap. The first thing out of the box was a paper in size-72 font that said "Put the CD in first". That was pretty much all the instructions, and the CD takes you step by step through everything in simple, well-presented steps accompanied by pretty pictures. With just that, you've got a number of powerful options for setting up your network, though maybe nothing you wouldn't find in competing products. What really gives this router its strength is the fact that it's a deliberately open platform:

"The Linux-based Wireless-G Linux Broadband Router was created specially for hobbyists and wireless aficionados." --http://homesupport.cisco.com/en-us/wireless/lbc/WRT54GL

There are numerous third-party firmwares available for it. From a fairly small amount of research, it seems that you can turn this router into virtually anything, as long as it fits in the available memory:

  • FTP server via ProFTPD
  • Windows file sharing client and/or server via Samba
  • Welcome/login page and access restrictions for network access in a public setting via Chillispot

And that's only a small sample of the stuff you can run on it using just the DD-WRT firmware.

That being said, I passed on DD-WRT and went with Tomato. It doesn't have all the capabilities, but the installation instructions for Tomato are very short and sweet; DD-WRT seems a bit more involved. Even so, with Tomato, you get static DHCP reservations (strangely missing from the stock firmware), internal DNS and DNS caching for your network (DHCP and DNS provided by Dnsmasq), SSH/telnet access to the router's OS--Linux, btw--a Samba client for making storage available to the router, great visibility into your network usage via live graphs and historical metrics reporting, and a slick AJAX web UI.

All of that, and installing Tomato really was as simple as downloading it and using the router's standard "update firmware" screen to install it! If you don't already have a WRT54GL, buy one. Or maybe two. If you do have one but haven't switched to a third-party firmware, do it now! You won't regret it! Unless your power goes out in the middle of flashing!

Saturday, August 14, 2010

Best Salsa Ever

My family got this salsa recipe from a little, then-hole-in-the-wall Mexican food restaurant in San Antonio, TX somewhere around 25 years ago. It's still the best salsa I've found anywhere, and the restaurant--now successful and with multiple locations--is still making the same salsa. Naturally, our version and the restaurant's have diverged over the years, but the ground rules have never really changed.

First, a disclaimer: I don't actually have or follow a recipe! I just know the ingredients that are needed, and I put them together until it tastes right. What I'm about to give you are the guidelines I use for getting all the ingredients and some rough guesses at the right quantities to make it all work together. If you follow this recipe exactly, you'll probably (I think) have something edible. If you want it to really shine, you'll need to take your time and pay attention to the "tuning" notes I've added at the end of this post. You'll also probably need to make it several times before you get the hang of it. What makes it difficult is that the amounts of ingredients you need change based on variations in the quality and flavor of the ingredients (primarily the tomatoes) that you get. Now, on with the show...

The Ingredients

I've split the list into "main" ingredients, which are what make up the body of the salsa, and "flavor" ingredients, which you use to tune the flavor to your personal preference. The quantities--or at least the ratios--of the main ingredients that I use are pretty well-known (by me, that is):

Main Ingredients
  • 8 lb Roma tomatoes
  • 2 lb slicing tomatoes
  • 1 lb Serrano peppers
  • 1 lb sweet onions

Note: With these quantities of main ingredients, you'll end up with very near the right amount of salsa to serve about 50 people at a rehearsal dinner where taco salad is being served (ask me how I know!). It probably makes around six to eight quarts. Make as much or as little as you want; the important part is to get the ratios right. You need about 10:1 tomatoes to peppers by weight, and roughly the same amount of onion as pepper. In other words, one pound of peppers for ten pounds of tomatoes, or 1/4 pound of peppers for 2.5 pounds of tomatoes. If you scale back the main ingredients, remember to scale back the flavor ingredients as well!
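If you'd rather let a computer do the scaling arithmetic, here's a quick sketch in Python. The function and its names are just my own illustration of the ratios above, not part of the recipe:

```python
# Scale the main ingredients from a target total weight of tomatoes,
# using the ratios above: 10:1 tomatoes to peppers by weight, onions
# roughly equal to peppers, and the 8:2 Roma-to-slicing split.
def scale_ingredients(total_tomato_lb):
    peppers = total_tomato_lb / 10.0   # 10:1 tomatoes to peppers
    onions = peppers                   # same weight as the peppers
    romas = total_tomato_lb * 0.8      # 8 lb Roma per 10 lb tomatoes
    slicing = total_tomato_lb * 0.2    # 2 lb slicing per 10 lb tomatoes
    return {"roma_lb": romas, "slicing_lb": slicing,
            "serrano_lb": peppers, "onion_lb": onions}

# The full-size batch from the list above (10 lb of tomatoes total):
print(scale_ingredients(10))
# A half batch:
print(scale_ingredients(5))
```

Remember that the flavor ingredients don't scale as cleanly; start low and tune by taste as described below.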

Here's where it gets tricky. I've tried to guess at rough quantities here based on the quantities of main ingredients I listed above, but this is just that: a guess. Take it as such! I'll cover these ingredients more in depth in the step-by-step directions later:

Flavor Ingredients
  • 1/3 cup lemon juice (bottled recommended)
  • 1/3 cup lime juice (bottled recommended)
  • 3 T garlic powder
  • 3 T salt

The Tools

Gather up the following tools and take them to the place where you'll be working:

  • Cutting board
  • Knife--sharp and/or serrated
  • Tomato corer--great for coring the tomatoes but even better for deseeding peppers
  • Small bowl or nearby trash can for scraps
  • Two large bowls, each big enough to at least hold all the tomatoes--you can get by with only one, but two is better
  • Food processor--NOT a blender!
  • Spatula for scraping/stirring
  • Dish towel--spill/splash cleanup
  • One or two powder-free latex gloves--for protecting your hand(s) from the peppers
  • Chips for sampling the salsa to fine-tune the flavor

The Directions

  1. Pop the stems off the tops of the peppers and wash the tomatoes and peppers.
  2. Protecting your hands with the gloves, halve and deseed the peppers. Taking the seeds out greatly reduces the heat from the peppers. The tomato corer is great for removing seeds from pepper halves. If you're feeling brave, skip this step. If you don't use gloves, you'll want to refrain from touching your eyes, nose, or any other sensitive areas for the next day or two.
  3. Core the tomatoes, and take off any other blemishes, bad spots, etc. Quarter the romas. As the slicing tomatoes are larger, you'll probably want to cut them into six or eight pieces. Put the tomato slices into one of the large bowls. If the tomatoes are excessively juicy, this lets the juice drain from them while they're waiting to be chopped up.
  4. On to the processing: chop up the peppers in the food processor pretty finely. You don't want them pureed, as a blender would do, but it's hard to get the peppers too fine in a food processor. High speed is fine for peppers.
  5. Cut the ends off of and peel the outer skin from the onions. Slice them up and chop them in the food processor, too. I save chopping the onions until right before I put them in to spare myself and others from the fumes. If your food processor is big enough, you can do the onions and peppers at the same time. I try to chop the onions less finely than the peppers, but that's not too important, either. High speed is fine here, too.
  6. Put the chopped peppers and onions in the second large bowl and cover them unless you want to drive everyone from the room you're working in.
  7. Now run the tomatoes through the food processor. You want to leave the tomatoes in larger chunks than the peppers: around 1/4 inch in size or larger. This may take some practice in your food processor. If you have a sharp blade, it's very easy to reduce the tomatoes to mush. Even the low speed of dual-speed food processors may be too fast. If so, you'll need to use the "momentary" setting and quickly flip it on and off to slowly dice up the tomatoes. As soon as you have no chunks left that are larger than about 1/2 inch, they're done. After chopping the tomatoes, pour them on top of the onions and peppers in that bowl.
  8. After all the tomatoes are chopped, you'll have some tomato juice in the bottom of the bowl that was holding them. You can add this if you like, but note that the salsa tends to float, so when you get to the bottom of the bowl, it'll be mostly juice. I usually just dump the extra tomato juice.
  9. Now for the hard part: the flavor ingredients. If you want to follow this recipe exactly (not recommended!), just add the lemon, lime, salt, and garlic in the amounts I listed and stir well with the spatula. Keep refrigerated, and let me know how it turns out. If you want truly great salsa, go on to the following paragraphs to learn the basics of how to balance the flavor ingredients with the main ingredients.

Fine Tuning

Instead of the full amount, start off by adding about three quarters of the amount of the flavor ingredients I listed, stir well, and then taste it. The exact amount isn't important. Just start slow and build up. Use the following guidelines to decide what to add:

  • If it tastes "flat", or too much like tomato, add more salt.
  • If it doesn't taste sweet enough, add some garlic and citrus (lemon and/or lime--your preference). Sometimes a touch of cumin, aka comino, will bring out the sweetness, too. Only use a little--like less than 1/4 teaspoon per cup of salsa. Some restaurants go crazy on the cumin in their salsa, and it just overwhelms everything else.
  • If it seems too hot, add citrus, especially lemon. It reduces the peppers' bite. Don't go overboard, though!
  • If you get too much lemon in, it can be balanced with salt up to a point. (Not sure if the same works for lime.)
  • If you get too much salt, citrus may help a little bit, but you'll probably have to pick up some more tomatoes to reduce the saltiness. A little bit of sugar or cumin can help here, too, mostly by masking the salt flavor, but I don't recommend it unless you've only gone a little bit over on the salt.
  • If it just doesn't taste quite "right", it probably needs more garlic. I can't help you any more there. I guess that one just comes with experience.
  • If you get to a point where you think you're adding too much of the flavor ingredients, and it still doesn't taste quite right, letting it sit for 1-2 hours--I recommend refrigerating ASAP--can change its flavor quite a bit as the ingredients blend. Just check it again later.

Additional Notes

If, while stirring the salsa, some light yellow foam forms on top, it means you're pretty close to the right mix of ingredients. I'm not sure what does that, but it always happens.

When made from good-quality ingredients, this salsa will keep for as long as 3 weeks in the refrigerator, but it's best if used within the first week or so. Freezing this stuff is a no-go. It basically turns to water with little scraps of tomato floating in it. Canning preserves it better than freezing, but don't attempt to can it unless you're familiar with the issues around safe home canning of tomatoes!! I've only canned it for one growing season so far, and the result was quite different from the fresh product, but it turned out well enough. I intend to keep working on it to see what I can do with it.

Some people like to add cilantro to their salsa. I admit that it does add a pleasant touch, but I generally don't find it to be worth the hassle. Cumin is another popular addition, but be careful: a little goes a long way.

You can use fresh lemons, limes, and garlic. If you know a fresher source of salt than the little cardboard container, let me know. Using all fresh ingredients can give the salsa a fuller flavor, but the flavor of citrus fruit and the potency of garlic varies widely from one store visit to the next. This makes it even more difficult to get the right blend of ingredients, and lemons with the wrong flavor can make a whole batch turn out poorly. That's why I nearly always use bottled lemon and lime juice and garlic powder. These products have a fairly uniform flavor and potency, so it's easier to get consistently good results with them.

Many salsa recipes call for Jalapeño peppers instead of Serranos, and if you enjoy the taste of Jalapeños, you can use some here. Try using 50% Serrano and 50% Jalapeño. Just keep the tomato:pepper ratio correct, and don't leave out the Serranos entirely. They're part of the key to this recipe.

You may also be tempted to try different types of onion. I've tried a few, and I've never found any to be satisfactory. In fact, if I can't get sweet onions, I sometimes just go without any onion at all. Ideally, find some Texas 1015's.

On the other hand, feel free to experiment with different varieties of tomatoes. I list Roma and slicing tomatoes because that's generally what's easy to find at the grocery store, and using Romas as filler is cheaper than going with all slicing tomatoes; however, I've experimented with several other varieties of tomatoes including Phoenix, Celebrity, Better Boy, Beefsteak, and BHN 444. All of them made at least a decent salsa except for the Beefsteak. I didn't like the taste of that at all. Better Boy and Phoenix are my personal favorites so far. YMMV. Note that different tomato varieties will need different combinations of flavor ingredients--another complication!

Sunday, March 7, 2010

Setting Up a Thin Client Network with LTSP

I've become interested in thin client software as a potential way to use a cheap laptop as a mobile GUI for applications running on a more powerful desktop or server. Yesterday, I tried out one called the Linux Terminal Server Project (LTSP). I found out it's distributed with Edubuntu as a way for colleges and universities to set up thin client labs. I just wanted to see if I could get it working, and I was pleasantly surprised by how easy it was. My steps:

Install LTSP server

With VirtualBox, I created a VM with 512M RAM and a 5G dynamically sized hard drive. Following the Ubuntu Community Documentation, I downloaded the Ubuntu 9.10 alternate amd64 ISO to install the LTSP server on it. It was pretty simple: choose "Install an LTSP server" from the Modes menu, then "Install Ubuntu".

Realize I was supposed to have two network adapters

One thing I could've done differently to make things simpler later on: add a second network adapter to the VM. LTSP server wants two NICs: one for communicating with the outside world and one for communicating with the network where the clients are. There's supposed to be a way to configure it with just one NIC, but I belatedly added the second one and configured it myself rather than puzzling that out. The first adapter had the VirtualBox default setting of "NAT", which lets it communicate out through the host interface. When I created the second one, I set it to "Internal Network" so that it could only communicate with other VMs, where my client would be.
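For reference, the same two-adapter setup can be done from the command line with VBoxManage instead of the GUI. The VM and internal-network names here are made up; substitute your own:

```shell
# Give the LTSP server VM a second, internal-only adapter
# (nic1 stays at the default NAT setting):
VBoxManage modifyvm "ltsp-server" --nic2 intnet --intnet2 "ltspnet"

# Put the diskless client on the same internal network and
# make it boot from the network:
VBoxManage modifyvm "ltsp-client" --nic1 intnet --intnet1 "ltspnet"
VBoxManage modifyvm "ltsp-client" --boot1 net
```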

Make LTSP DHCP work with new adapter

LTSP server runs a DHCP server on the client-facing network interface. This is the first part of how LTSP clients start up: they get an IP address and some other instructions from the DHCP server on the LTSP server. Because I had only the one NIC at first, the DHCP server wouldn't start, and I had to figure out the right network settings to make DHCP run on the second, "Internal" adapter.
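For the curious, the two pieces involved on the Ubuntu of that era look roughly like this. The interface name and subnet below are illustrative, not my actual values:

```
# /etc/default/dhcp3-server -- bind dhcpd to the client-facing NIC
# (eth1 is illustrative; use whichever adapter faces the internal network)
INTERFACES="eth1"

# /etc/ltsp/dhcpd.conf -- the subnet served to the thin clients;
# the interface above needs a static address in this range
subnet 192.168.0.0 netmask 255.255.255.0 {
    range 192.168.0.20 192.168.0.250;
    option routers 192.168.0.1;
}
```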

Create LTSP client VM

After the DHCP server was working, I created a second VM with 512M RAM and no hard drive. I set its sole network adapter to "Internal Network" so that it would be able to see the second adapter of the LTSP server, where the DHCP server was running. Then I set it to boot only from the network and started it up. It almost immediately got a response from the DHCP server, but it failed with this error:

PXE-T01: File not found
PXE-E3B: TFTP Error - File not found.

Fix bad path in LTSP DHCP configuration

This one took me some research to figure out, but it turns out it's just a simple path problem. In the DHCP configuration of the LTSP server, there's a line that gives a path to a file called "pxelinux.0", which is the initial file that has to be retrieved for a network boot to work. The server tells the client via DHCP where that file lives, and the client uses TFTP ("trivial file transfer protocol") to retrieve it. For some reason, the path to the file was wrong when I installed it. I don't remember what it was set to at first, and I don't have the correct value in front of me. It was something like /var/lib/tftpboot/ltsp/amd64/pxelinux.0. Also, something does some "chroot"ing somewhere, making the correct path for requesting the file /ltsp/amd64/pxelinux.0 (I think). As soon as I got that right and restarted the DHCP server, I was good to go.
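In other words, the fix amounts to something like this in the DHCP config. This is illustrative; double-check your own TFTP root and architecture directory:

```
# /etc/ltsp/dhcpd.conf -- the TFTP daemon serves files relative to
# its root (/var/lib/tftpboot here), so the filename handed to the
# client must NOT repeat that prefix:
filename "/ltsp/amd64/pxelinux.0";
```

After editing, restart the DHCP server so the change takes effect (on that era of Ubuntu, something like `sudo /etc/init.d/dhcp3-server restart`).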

Create a user with a strong enough password to suit LTSP

One last thing: after fixing the path to pxelinux.0, the diskless VM was able to get all its required files and boot to a login screen, but I couldn't log in to the client with the same credentials I was using on the server. I had created a user with a trivial password because this was just a test VM, and I read something somewhere about LTSP having different password requirements than Ubuntu, so I made another user with a stronger password, and then I could log in. That's it. Now I can start the client VM, which boots from the network, and get to a Gnome desktop. Everything I run displays on the client but runs and stores files on the server. Pretty neat.