Time for an update on my research, where I'm still working on the odd idea that I've been speaking about. In my last installment, I had what seemed to be good results from an experiment, and I was getting ready to set up some more control runs to see if things would behave as they should.
Well, those reactions are going right now, thanks to the efforts of a colleague in Biology. These experiments will be running all night (Monday EST) and finish off around lunchtime tomorrow. Then I'll take the solutions down to another colleague in our analytical department, and in the next few days, she'll tell me what's happened. The wait will not be an easy one. I can tell that already.
That's because this batch of experiments is actually a pretty strenuous interrogation. I've tried to set it up so that good results can come out of it only if there's something real going on. I think there is, of course, but it's impossible to say for sure. My results from the first experiments could be characterized as "consistent with my hypothesis," but that's all. Mind you, that's a lot better than the alternative, hoo boy, but there could be other (less interesting!) things they're consistent with.
But the reactions taking place tonight should sort things out, but good. This run has four different parts to it: There's a repeat of the most promising conditions from the first experiment, just to ask the most basic question (reproducibility). A distressing number of interesting experimental results never poke their heads up again, so that's one hurdle. In the second part, there's a set of conditions that should cause a larger effect than I saw the first time. This attempt is being ratcheted up in two separate steps. If it goes up nicely each time, I'll be very happy. If the results come back one-up, one-down, I'll be staring out the window a lot, trying to figure that out. And if they show no effect, well. . .
The third part reverses field: it's an attempt to completely abolish the effect, by a mechanism that should be quite specific to my hypothesis. This one's in two steps, too, in another attempt to see a dose-response relationship. If this one comes through, reverting the system to the same results as the corresponding blank experiment, that would be strong evidence that I'm on the right track. The reverse holds true, too, unfortunately - if there's no effect here, my hypothesis has taken a torpedo right in its engine room. (That blank experiment is running tonight, too; it's an important control for all these tests.)
And the last ring of this circus is another attempt to make my desired effect disappear. I've changed a chemical structure in a way that should make very little difference to anything, except in the case of my hoped-for mechanism. It should shut that down pretty cleanly. It'll be hard to hold on to my current idea if this doesn't work as planned, either. I'll have to fall back on experimental error, which is not the first explanation you want to reach for, or some other variable that has completely escaped my notice. Neither of those is a good bet at this point; it'd be a lot simpler to assume that nothing interesting is happening at all.
For readers outside the research arena, those try-to-kill-it experiments are a powerful and commonly used technique. It's hard to run them sometimes, because it's hard to escape the mental picture of your new phenomenon, just arrived into the world of physical experience, being scared back into its hole by the sudden advent of searchlights and sirens. But, you know, there are a lot of things to work on in this world. And if you don't figure out what's real and what isn't, you can spend most of your scientific career doing the equivalent of digging holes and filling them back in. It's hard on a hypothesis, being put to the test like this, and I'm here to tell you that it's not all that easy on the person behind the idea, either. If something's real, though, it'll show itself - it'll have to show itself - no matter what nasty questions you ask. Better to ask them up front.